Is antivirus testing corrupt?

Discussion in 'other anti-virus software' started by bellgamin, Aug 9, 2007.

Thread Status:
Not open for further replies.
  1. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    Here's an interesting article: "Is AV product testing corrupt?" By Robin Bloor, IT-Analysis.com, August 9, 2007

    A short extract from the article's comments concerning independent antivirus testing organizations, the *malware samples* they use, and the fact that they mainly test only signatures/blacklists:
     
  2. pilotart

    pilotart Registered Member

    Joined:
    Feb 14, 2006
    Posts:
    377
    Yes, "...Major magazines report comparison statistics, but which do you trust?" I like to start here (Wilders Security).

    Another quote from that report:
    And my dad used to say, "Figures don't lie, but liars figure." [also attributed to Mark Twain]

    Likely the worst threat for many average users would be all the crap security in Google/Yahoo/etc. sponsored ads that 'find' bogus threats, just to take your money and corrupt your system.

    See: the short list of Trustworthy Anti-Spyware Products
     
    Last edited: Aug 9, 2007
  3. Miyagi

    Miyagi Registered Member

    Joined:
    Mar 12, 2005
    Posts:
    426
    Location:
    None
    "Who can you trust"? - That's my Inspector and all the AV-Experts here at Wilders including IBK. :)
     
  4. Sjoeii

    Sjoeii Registered Member

    Joined:
    Aug 26, 2006
    Posts:
    1,240
    Location:
    52°18'51.59"N + 4°56'32.13"O
    Interesting story, thanks!
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
  6. dan_maran

    dan_maran Registered Member

    Joined:
    Aug 30, 2004
    Posts:
    1,053
    Location:
    98031
    Good old Bloor. I read this yesterday; this guy never ceases to amaze me!

    In order for this independent testing body to be truly independent, it would need to be international and monitored. Maybe Mr. Bloor can plug this to the UN.... <Sarc>
     
  7. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    No, it isn't. IBK still rules at the top, but you really can't just wipe the others away either. You have to look for that common thread. Let's say Kaspersky is number 2 at IBK's, and 1 somewhere else, and 2 and then 3 at different testing sites. What it all tells you is that the others are not full of crap and that Kaspersky is a very good AV. Same for those that rate at the bottom. It really doesn't matter who is 1 or 2 or 3. But by having others test, it shows and supports the better products and IBK's findings. :cool:

    There will always be one AV that just doesn't make sense with most sites, but again, look at the bigger picture, not why yours isn't always frigging number 1.
     
  8. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    Best comment is from Bontchev:

    By Dr. Vesselin Bontchev
    Posted Thursday 9th August 2007 11:56 GMT

    It seems that Mr. Bloor is simply incapable of posting anything that I can't disagree with - even though most of this particular article isn't even his. :)

    But, basically, yes, AV tests suck. Big time. The problem, however, is not that the testers fiddle with the statistics or that AV companies submit "special" samples. The problem is competence, or more exactly the lack of it.

    Testing an AV product *properly* is an *extremely* difficult job. Although we occasionally see tests that are not so bad (e.g., Virus Bulletin's, VTC Hamburg's, etc.), *none* is what I would call "excellent" and the vast majority of them are terribly bad. I can confidently say that none of those currently testing AV products has sufficient competence, resources, time and manpower for the job. Those who have the competence are world-class AV researchers - and they were snapped up by the AV companies a long time ago and, as such, cannot do independent AV product testing due to conflict of interest.

    Describing how to conduct AV tests properly is waaay outside the scope of this simple comment (I recently participated in a 2-day workshop on the subject where my talks covered only a small aspect of the job) but basically you need:

    1) Proper malware collection. This means about half a million currently known different malicious programs and a testing team that is able to analyze every single one of them, figure out which ones are viruses (and which samples are simply non-working crap), replicate them, figure out which samples contain the same virus (possibly polymorphed), and classify and order them properly. Wrong shortcuts currently used by incompetent testers: put in the collection anything that a scanner reports as something, or use only the set provided by the WildList Organization.

    2) Testers who understand exactly how every single tested product works, what its components are and how to test them properly. And, believe it or not, the different AV products work in vastly different ways. Wrong shortcut currently being used by incompetent testers: just test the on-demand scanner component of the product.

    3) Lots of time, people (competent ones!) and disk space (terabytes), plus helper tools that you develop in-house to facilitate some of the tasks.

    Nobody currently has the capability to do all of the above properly - and I mean NOBODY. That's why the AV tests all suck.

    To Steve Browne: you can already do most of what you want in WinXP+NTFS. SU-ing only for installation, read-only directories, browsers that are applications, monitors, private files, limitations on what is executable. Problem is, in order to do it, you have to be a competent Windows sysadmin (which you obviously aren't). And if you put a Joe Luser in charge of administering a Linux box, he will screw it up just as surely as a Windows box. The problem is that Windows is used en masse, while Linux is used by a few tinkerers who know what they are doing. The mass of people are *not* competent sysadmins. They will screw up *whatever* they are forced to administer, no matter what OS it is running. Make Linux as widespread and as easy-to-use as Windows (*both* factors are essential) and the malware problem will remain the same - except that it will be malware for Linux.
     
  9. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Well then, I guess there is the world, and then the real world. :eek:
     
  10. beads

    beads Registered Member

    Joined:
    Jun 1, 2005
    Posts:
    49
    Figures lie and liars figure... Go figure.

    I feel so cheap and used by these dirty magazine people!
     
  11. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Questions of technical competence certainly always lurk in the background. However, as a practical matter, even if competence were not in question, each of the points raised really wouldn't be fully resolved, due to the practical demands of operational logistics.

    Given that, the obvious question is "What then?" At their best, the available tests imperfectly quantify an incomplete assessment of product performance. That's why you have to look at them in broad strokes, as some such as IBK continually note.

    Perhaps it's because, on a daily basis, I have to deal with, make sense of, and base expensive commercial decisions on experimental data that has as many issues as a typical AV test that I really don't view them as all bad. As with the situations that I normally deal with, you have to develop a sense for how discerning the data really is, note that some differences are within the intrinsic noise of the test and ignore them, and combine the noisy but available data with a soundly based fundamental analysis to use the results for general guidance.

    If you are hanging on detection differences of a few percent or less, you're probably hanging onto the wrong thing. If you view the numbers in a broad context and as just one aspect of product performance, test results can be of use, even with all the logistical problems noted above and a few potential questions of technical competence thrown in to boot.

    Blue
     
  12. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    [Given that, the obvious question is "What then?" At their best, the available tests imperfectly quantify an incomplete assessment of product performance. That's why you have to look at them in broad strokes, as some such as IBK continually note.]

    I recognize, to some extent, the difficulty in performing accurate and meaningful tests. Even with that being true, I have some degree of confidence in some tests, and especially IBK’s. However imperfect they may be, they do provide an indication of the capabilities of the various programs. The fact that they are imperfect does not negate the results in my view, and I will continue to put some degree of faith in them when choosing an AV.
    I wonder if the true experts, and that term is not meant to be sarcastic, would have us just throw darts and hope the application is good enough to provide meaningful protection?

    [If one is hanging on the detection differences of a few percent or less, you're probably hanging onto the wrong thing. If you view the numbers in a broad context and as just one aspect of product performance, test results can be of use, even with all the logistical problems noted above and a few of potential questions of technical competence thrown in to boot.]

    That makes sense to me, and there are factors other than detection rates. The most important is the ability to operate with a minimum of problems on a particular system. If it won't run, it is not any good.

    I wish there were a good way to test the prevention aspect, but there does not seem to be a practicable way to do that. I would always rather malware be intercepted and kept off my system than detected and removed after it has installed itself.

    Many/most here have forgotten more than I’ll ever know about this subject, but I do know what operates well for me, and the only method I have of selecting an AV according to its detection and removal capabilities is to consider the tests conducted. I am selective as to what tests I pay attention to.

    Best Regards,
    Jerry
     
  13. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    And then there is of course the really real world....:blink:
     
  14. Zombini

    Zombini Registered Member

    Joined:
    Jul 11, 2006
    Posts:
    469
    Each product's fanboys and customers trust the tester that put their product at #1.

    Trend's fanboys/customers trust Consumer Reports.
    Kaspersky's fanboys/customers trust PC World.
    Symantec's fanboys/customers trust PCMag.
    Zone trusts CNET.
    :D
     
  15. mercurie

    mercurie A Friendly Creature

    Joined:
    Nov 28, 2003
    Posts:
    2,448
    Location:
    Sky over the Wilders Forest
    Forget the mags. :thumbd: After filtering out the obvious fanboys and haters of certain products, I learn more about what is good from Wilders (and other places like it) than from any other source, especially the worthless "news stand" rags.
     
  16. si_ed

    si_ed Registered Member

    Joined:
    Aug 14, 2007
    Posts:
    54
    Of course, there is a significant difference between corrupt and incompetent. A corrupt test suggests that the results are biased in some way, whereas someone who is unable to test properly (or communicate his/her results accurately) is just a rubbish tester - not necessarily a corrupt one. In both cases the results may actually be useful in some way or other, but the methodologies are suspect, as are the overall verdicts. If the biased/stupid tester decides to outright lie, then of course the test results will always be useless.

    Even an 'ideal' test, which I don't think has ever been developed, may not produce results that match the personal experience of everyone here.

    As someone said above, if a number of tests from different organisations rate a certain vendor's products highly (for certain tasks), then it probably is better (at that one particular task). Testing every product thoroughly for each element of the protection it provides is a tough challenge. Being unable to produce such a test does not necessarily make the effort to test corrupt or incompetent, though.
     
  17. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    Most of the "Testers" are not even aware that they are incompetent. That's the problem. You have first to admit that you're doing something wrong (or to admit that you have to improve your knowledge) before some action can take place. It's funny, but especially the worst testers think they are good :D Some of them can't even tell you the difference between a worm and a virus. Let alone verifying samples (Damaged, not malicious, false positives etc)

    There is not even one AV tester in the whole world who is *REALLY* good.
    Some come very close to good, but that's it, because their resources are limited. It would take a very, very long time to make a proper AV test, and by the time the results were published they would already be outdated.
    Why is nobody really good at this? Because anyone that good wouldn't stay an AV tester; he would work for some AV company, earning maybe twice what he gets as an AV tester.

    Here's the deal: for an almost "perfect" AV test you would have to verify EVERY SAMPLE by hand! And not just by someone who checks with other AV scanners whether something is detected - by someone who knows how to read a disassembly and understands malware groups and variants. Because *ONLY THEN* can you say whether something is malicious or not. Using other AV products to determine what is malicious is "cheating". It's widely used because, as I said before, nobody has the resources to manually verify every sample. And there the drama has its roots! If you automate something, there will always be mistakes and flaws. One AV includes something wrong (a clean file as a virus, for example); another AV thinks "oh, if they detect that, we have to include it as well" (again: to save resources, not looking at the file, just assuming that the other company knows why they included it!) - and there we go, some AV tester thinks the same way: "if it's detected by a lot of AV programs, it must be a virus/malware". WRONG! WRONG! VERY WRONG!

    Just ask an AV tester about the PE file structure, including section headers, import table addresses, etc. Or make it simple and ask someone how to retrieve the offset to the PE header. It's at 0x3C, by the way: read the value there and you have it. Almost no tester knows this. THAT IS ESSENTIAL STUFF EVERYONE SHOULD KNOW, AND EVERYONE SHOULD BE ABLE TO TELL YOU THIS IF YOU WAKE HIM UP AT MIDNIGHT ASKING FOR IT. Without starting Google. Without asking somebody else. Because it's so simple that almost every developer (regardless of whether he's in the AV industry or not) should know it.
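
    To make the point concrete, here is a minimal sketch (Python; added as an illustration, not part of the original post) of reading that field: the DWORD at offset 0x3C of the DOS header (e_lfanew) holds the file offset of the "PE\0\0" signature.

    Code:
    # Minimal sketch: locate the PE header via e_lfanew at offset 0x3C.
    import struct

    def pe_header_offset(path):
        with open(path, "rb") as f:
            dos = f.read(0x40)
            if len(dos) < 0x40 or dos[:2] != b"MZ":
                raise ValueError("not an MZ executable")
            # e_lfanew: DWORD at 0x3C = file offset of the "PE\0\0" signature
            (e_lfanew,) = struct.unpack_from("<I", dos, 0x3C)
            f.seek(e_lfanew)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("PE signature not found")
            return e_lfanew

    # e.g. print(hex(pe_header_offset("C:/Windows/notepad.exe")))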
     
  18. si_ed

    si_ed Registered Member

    Joined:
    Aug 14, 2007
    Posts:
    54
    I agree with pretty much everything you've just said. Personally I think that most flaws in AV testing are down to incompetence rather than a conspiracy of corruption, which is what this thread was originally about.

    I agree that using AV scanners is a poor way to test a sample base. There is an alternative method to yours that is easier but not quite so accurate (making it not the perfect test, I admit). That is to run the malware on a lab machine (not a virtual one, I hasten to add) and monitor network traffic and disk writes. A malicious file, in almost all cases, will attempt to write something to the disk and will most probably access the network at some stage.

    One problem with this is: when will it try to achieve these goals? Immediately; in half an hour; when you try to log into an online bank; or at a certain date in the future? Of course, again as you say, anyone skilful enough to set that up properly, or better still to dig into IDA properly, will most likely prefer the salary of an AV researcher over that of a journalist.
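
    As a rough sketch of the disk-write half of that approach (my own illustration, not the poster's actual tooling; the watched path is a placeholder), one could snapshot a directory tree before detonating a sample on the disposable lab machine and diff it afterwards; the network side would be captured separately with a packet sniffer.

    Code:
    # Snapshot-and-diff of file hashes to spot files created or modified while a
    # sample runs on a throwaway lab machine. Illustrative only.
    import hashlib, os

    def snapshot(root):
        state = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        state[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass  # locked or vanished file; skip it
        return state

    def diff(before, after):
        created = [p for p in after if p not in before]
        modified = [p for p in after if p in before and after[p] != before[p]]
        return created, modified

    # before = snapshot("C:/")            # or a narrower directory of interest
    # ...run the sample and wait...
    # created, modified = diff(before, snapshot("C:/"))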
     
  19. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    That is not an option. It is only "suitable" if you know what kind of malware you are dealing with. As you mentioned yourself, there are dependencies for malware. Some are event-triggered - banking Trojans, for example. There are also downloaders which only download at a specific date/time. You cannot determine that with a "trial & error" method. You have to disassemble them and find the loop and the condition under which they perform the malicious action. Same for some file infectors: they simply don't infect anything if they are running in an empty subdirectory or if there are no files matching their "expectations" (e.g. specific filenames, etc.), and that you can only determine successfully by having a look at the asm source.
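
    A harmless toy example (mine, not from the post; the date and filenames are arbitrary) of why trial & error misses such conditions: a payload gated on a specific date and on finding particular filenames looks completely inert in a quick run, and the trigger only shows up when you read the code.

    Code:
    # Benign illustration of a gated trigger that "run it and watch" testing
    # would miss; the date and filenames are made-up examples.
    import datetime, os

    TRIGGER_DATE = datetime.date(2007, 12, 24)
    EXPECTED_FILES = {"hosts", "services"}

    def gated_action(workdir="."):
        if datetime.date.today() != TRIGGER_DATE:
            return "dormant: wrong date"
        if not EXPECTED_FILES.intersection(os.listdir(workdir)):
            return "dormant: expected files not present"
        return "trigger conditions met"  # only now would a real sample act

    print(gated_action())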

    Btw do we know each other? Are you the Ed from The Register?
     
  20. si_ed

    si_ed Registered Member

    Joined:
    Aug 14, 2007
    Posts:
    54
    No, I don't think we know each other and I don't work for The Register, although I do know El Reg's John Leyden quite well. My name is Simon Edwards.
     
  21. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    Thanks for that nice MSN chat, m8 - see ya in London :D
     
  22. Q Section

    Q Section Registered Member

    Joined:
    Feb 5, 2003
    Posts:
    778
    Location:
    Headquarters - London & Field Offices -Worldwide
    At one time someone made a comment that a certain AV programme did not catch a virus on his computer. We asked for documentation and none ever appeared.

    Again we post this request:

    Does anyone have any documented evidence that the top-rated AV programme (rated by the top-rated AV testing organisation) has let a virus execute on their system?

    How does this relate to the topic of the thread, you ask? If the top-rated AV testing organisation is either incompetent or corrupt, then let us have some hard proof by showing which virus (not worm, Trojan or other non-virus malware) has executed on a system with that AV installed. In the meantime - while it is true there are inferior so-called AV testing organisations - the top-rated AV testing organisation and the top-rated AV programme have yet to be shown to be less than expected by users and by those who must recommend programmes.

    While we agree that having a standard for testing AV programmes may be worthwhile, we still have not heard any proof that the top-rated AV programme, or the top AV testing organisation that rates it, is either incompetent or corrupt. Hence their (the top-rated AV testing organisation's) recommendations stand, because so far they have not been proven to be wrong, incompetent or corrupt.

    Thank you.
     
  23. njtrout

    njtrout Registered Member

    Joined:
    Aug 27, 2007
    Posts:
    7
    I do believe Mr. Bloor would not go so far out on his claims without having evidence, do you? It would be difficult to present the evidence in a public forum for legal reasons, but this SHOULD raise questions in your mind about some of the most recent test results by an organization he mentioned. Yes, he holds the burden of proof, but I believe the doubt placed in everyone's mind (that was so quickly defended) should get you to ask the testing organization questions.
     
  24. si_ed

    si_ed Registered Member

    Joined:
    Aug 14, 2007
    Posts:
    54
    I agree that he *should* have evidence. I don't see why it would be difficult to present this evidence, though, as long as it is true. In fact, you are more likely to run into legal problems when making public accusations if you do *not* provide evidence.

    It is always important to question the source of information, be it from AV testing organisations or from commentators such as Bloor. The burden of proof is on the information provider and no one should accept what they claim purely on trust, just because they *ought* to have evidence.

    As you say, charges of corruption definitely place questions in people's minds. And it is precisely for this reason that these types of claims should be supported by evidence. If you publicly accuse a neighbour of being a pervert you will cause him all kinds of problems, so you had better have some evidence of this 'fact'. Inform the tax office that someone you know is evading tax, and the question this raises in the tax inspector's mind will also cause a lot of disruption for the accused. Again, it is better to have evidence before making accusations.
     
  25. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Actually, before Mr Bloor does that, he should probably spend some time providing a story title that is more reflective of the content of the article. When you get down to it, his main point is that the view provided by some/all of these tests is corrupted, i.e. not reflective of field-use reality, as opposed to the tests themselves being purposely skewed in advance by corrupt testers (although he does make a brief passing comment on this as well).

    One main point of the Kirk article that seems to have crystallized Bloor's missive is that the straightforward on-demand challenge scan - which may have been reflective of a large fraction of the product's functionality in the past - doesn't probe additional features that AV vendors have been adding over the years to assist in malware detection. As such, the tests don't probe overall product functionality.

    Neither author touched on the point that maintaining testbed fidelity in an era of escalating volume of malware is a near impossibility for the operational reasons already mentioned above. The net result is that a user is left with the situation that a high % detection is probably a good result, while a lower % detection is not necessarily worse in real world use.

    Blue
     