The danger of AV testing sites

Discussion in 'other anti-virus software' started by Bodhitree, Dec 20, 2012.

Thread Status:
Not open for further replies.
  1. PJC

    PJC Very Frequent Poster

    Joined:
    Feb 17, 2010
    Posts:
    2,959
    Location:
    Internet
Amen!
Andreas, please stop by here more often (I know you are extremely busy),
because some guys here are spreading false information,
and even worse, they insist on it!
     
  2. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
For example, what is ITW (in the wild) in my country may not be ITW in yours: regional malware distribution, surfing/browsing habits, what kind of work you do on the internet, etc. :)

For example, Kingsoft may do very badly in AV-C's tests but may be pretty good in Asia. But even if you are an Asian user, that doesn't mean you only visit Asian web sites. Thus, Kingsoft failing the test is still bad for that product (that's the "threshold" I was talking about) :)

AV-C, AV-Test, etc. collect samples from all over the world and hence provide a good reference point: if a product gets STANDARD (i.e. passes) in these tests, one can be assured it is decent enough in the real world. Beyond that, the detection rate doesn't really matter as much as support experience, cost-effectiveness, user-friendliness, bugs/compatibility, etc.

This is what I am saying: testing is limited in that it deals in absolutes. While it is a good reference point, in the real world there aren't that many absolutes :)

I'm saying this based on my real-world experience, and it isn't false info, PJC. If you've been around suspicious areas a lot, you'd know that pretty much all anti-malware products are weak against zero-day threats and that most decent behaviour blockers work as they should. But the average guy has no need to worry about this :)
     
  3. Notok

    Notok Registered Member

    Joined:
    May 28, 2004
    Posts:
    2,969
    Location:
    Portland, OR (USA)
    Except that users tend to allow things they shouldn't -- most likely even advanced users! :)

    Anyway, good post Firecat :)
     
  4. PJC

    PJC Very Frequent Poster

    Joined:
    Feb 17, 2010
    Posts:
    2,959
    Location:
    Internet
You failed to take into account heuristics, behavior blocking, and more, just to name a few.
For many years now, signature-based detection
has not been the only way AV products protect against malware.
- Do you have both the professional expertise and the facilities required for these tests?
- Can you handle thousands of malware samples?
- Can you verify that your malware samples are qualified?

AV-C and AV-Test are not perfect (but then, who is?),
yet they are the best AV-testing organizations
the security-software industry has to offer so far.

Now, if someone can provide more reliable tests,
we are all ears...
     
  5. Notok

    Notok Registered Member

    Joined:
    May 28, 2004
    Posts:
    2,969
    Location:
    Portland, OR (USA)
    Agreed!

    Andreas: maybe talk a bit about some of the considerations you make when selecting samples and generally doing the tests?
     
  6. PJC

    PJC Very Frequent Poster

    Joined:
    Feb 17, 2010
    Posts:
    2,959
    Location:
    Internet
Your real-world experience, like mine, is subjective.
The AV-C/AV-Test results are, by far, more objective than
any individual (i.e. subjective) experience.
I was not referring to you.
I've been around suspicious areas for two decades; I know about zero-day malware and behavior blockers.

- I will not try to diminish the work of AV-C and AV-Test just because I do not use a resident AV (many here do that).

- Nor will I try to dismiss the work of AV-C and AV-Test because my favorite AV scored low (many here do that, which is even worse).

(To avoid any misunderstanding, I'm not referring to you, Firecat.)
     
  7. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
Indeed, and that is why I said 90% can be better than 98%: objective tests are just a reference point that tells me whether a product does the job or not. So it does not follow that, say, BitDefender is always better than AVG and that I should therefore buy BitDefender without even considering AVG :)

AV-C and AV-Test are reliable given the constraints under which they test, but the reader should be well aware of what the results mean. The real certification is "standard"; beyond that, the differences don't matter much at all (actually, this is true of any test in general).

    okay, but I clarified just in case :)

These organizations do a very good job of giving users a reference point for evaluating the protection rate of a product, but I wish they had better means of informing the less-aware user that any product that passes is already very good to begin with. I know AV-C writes this in every comparative, but even so I see people go "Hey, look, <product X> scores 92% and <product Y> scores 99%, so I'll buy <product Y> and dump my <product X>". The thing is that a 7% difference on paper does not necessarily mean any difference in real-world protection (as I said, a lower-scoring product may actually do better in some scenarios) :)

    Heck, I use products that are average at best in AV-C's tests and they are all doing the job just as well as the highest scorers. :D
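The reference-point argument above can be put into rough numbers. The sketch below is a back-of-the-envelope illustration only: the yearly threat count, the two detection rates, and the behaviour-blocker effectiveness are all invented assumptions, not figures from any test.

```python
# Back-of-the-envelope: how much does a 92% vs. 99% detection rate matter
# once a second, independent protection layer (e.g. a behaviour blocker)
# is in place? All figures are illustrative assumptions, not test results.

def expected_misses(threats_per_year, detection_rate, second_layer_rate=0.0):
    """Expected infections per year when scanner misses can still be
    caught by an independent second layer."""
    missed_by_scanner = threats_per_year * (1 - detection_rate)
    return missed_by_scanner * (1 - second_layer_rate)

threats = 20  # assumed yearly malware encounters for an average user

# Scanner alone:
low_scorer = expected_misses(threats, 0.92)   # ~1.6 expected infections/year
high_scorer = expected_misses(threats, 0.99)  # ~0.2 expected infections/year

# Same scanners backed by an assumed 80%-effective behaviour blocker:
low_layered = expected_misses(threats, 0.92, 0.80)   # ~0.32
high_layered = expected_misses(threats, 0.99, 0.80)  # ~0.04
```

Under these made-up numbers, both products leave the user with well under one expected infection a year once any decent second layer exists, which is the sense in which a few points of headline detection rate stop mattering.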
     
  8. Notok

    Notok Registered Member

    Joined:
    May 28, 2004
    Posts:
    2,969
    Location:
    Portland, OR (USA)
I personally tend to think that an AV that can reliably pass the VB100 is generally good for anyone who is not a particularly high-risk user. As Firecat says, an AV that gets an 'average' score in these tests can still do just as well in the real world. With tens of thousands of new pieces of malware being pumped out every day, overall detection rates could vary by the hour.

One of the big things I personally look for is response time. If a cloud AV misses the first infection seen but adds detection quickly, before more than a handful of computers are infected, that is a lot better than a vendor that takes long enough for thousands of computers to be infected before detection is added.
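The response-time point reduces to simple arithmetic: the damage done before a signature ships scales with how fast the sample spreads times how long the vendor takes. The spread rate and response times below are invented purely for illustration.

```python
# Rough model: machines infected before detection is added is roughly
# (new infections per hour) x (vendor response time in hours).
# All numbers here are made up to illustrate the point, not measured.

def infected_before_detection(infections_per_hour, response_hours):
    """Machines hit before a detection update ships, assuming a
    constant spread rate over the response window."""
    return infections_per_hour * response_hours

spread_rate = 50  # assumed new infections per hour for a fast-moving sample

fast_cloud_av = infected_before_detection(spread_rate, 2)   # 2-hour response
slow_vendor = infected_before_detection(spread_rate, 72)    # 3-day response
```

With these assumptions, the fast responder caps the outbreak at a hundred machines while the slow one allows thousands, even if both eventually detect the sample, which is why response time can outweigh first-seen detection rate.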

    We certainly see a lot of that here on Wilders, but Wilders is for enthusiasts; we have fun checking out these products, adjusting security strategies, and so on. I think that a lot of times these tests just give people an excuse to switch things around :) (And there's nothing at all wrong with that! Learning and experimenting is a great thing for anyone to do.)
     
  9. Q Section

    Q Section Registered Member

    Joined:
    Feb 5, 2003
    Posts:
    778
    Location:
    Headquarters - London & Field Offices -Worldwide
What is interesting is finding different results among the different testing organisations. If each conducted tests that were valuable to the end user, it would seem we should find similar results; instead we find widely varied results based on different testing methods and philosophies, correct?

To whom are we (this thread) referring? The organisations include, but are not limited to, the following:

    Anti-Malware Test Lab
    AV Comparatives
    AV-Test
    ICSA Labs
    iMPERVA (some testing)
    NSS Labs
    Virus Bulletin
    West Coast Labs

There are also several magazines that purport to test antivirus programmes.

AMTSO, an umbrella organisation attempting to establish legitimate standards for testing (and itself facing challenges in setting anti-malware testing standards)

    The above were posted as a resource for those who may find the references useful.
     
  11. Anth-Unit

    Anth-Unit Registered Member

    Joined:
    Oct 13, 2006
    Posts:
    108
There was an article in The New York Times about the recent Chinese attacks on the NYT. They were using Symantec's enterprise product as their antivirus. Here's what the article said:
    "Over the course of three months, attackers installed 45 pieces of custom malware. The Times — which uses antivirus products made by Symantec — found only one instance in which Symantec identified an attacker’s software as malicious and quarantined it, according to Mandiant."

It seems to me a bit of a silly debate. The people who bother reading these tests are probably the ones least likely to encounter the kind of malware these tests use as a data set. I'm far more interested to see how these security suites perform in atypical scenarios like the one above than against something I might download from fakeav.org.
     
  12. Q Section

    Q Section Registered Member

    Joined:
    Feb 5, 2003
    Posts:
    778
    Location:
    Headquarters - London & Field Offices -Worldwide
Some additional thoughts on features/abilities necessary for good protection that are not covered by security software, and hence not by lab testing:

A security programme or "security suite" (if one uses a suite) may also include a feature to check and report whether the computer has the latest Microsoft etc. updates; at least one anti-malware programme already does this. If it also reported missing updates for Flash, Java, and other browser "enhancements" that are commonly used and are also very common vectors for malware, then neither heuristics nor a signature database would be involved, yet the feature would go quite a long way toward basic user security (provided the user applies updates right away if they do not use automatic updates). Checking whether older versions of the above are still installed and should be uninstalled would again be a very valuable feature, but as far as is commonly known, no security software does this, and it is not something tested by the labs.
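The update-reporting feature described above could be as simple as comparing installed version strings against a table of known-latest versions. A hypothetical sketch follows: the version table and the stubbed inventory are invented for illustration, and a real product would query the operating system's software inventory instead.

```python
# Hypothetical sketch of the "report outdated browser plug-ins" idea.
# Versions and inventory below are invented; only the comparison logic
# is the point.

LATEST = {            # assumed current versions, for illustration only
    "Flash": (11, 5),
    "Java": (7, 11),
}

def parse(version_string):
    """Turn '10.3' into (10, 3) so versions compare numerically."""
    return tuple(int(part) for part in version_string.split("."))

def outdated(installed):
    """Return plug-ins whose installed version lags the known latest."""
    return [name for name, ver in installed.items()
            if name in LATEST and parse(ver) < LATEST[name]]

installed = {"Flash": "10.3", "Java": "7.11"}  # stubbed software inventory
print(outdated(installed))  # → ['Flash']
```

Tuple comparison handles multi-part versions correctly (so "10.3" is seen as older than "11.5" rather than compared as text), which is the small but essential detail such a reporting feature would need.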

Additionally, how about a security product or suite (and testing thereof) that checks all installed browser toolbars and reports good, bad, and questionable installs? Installed toolbars are also very common vectors for malware infection, or at the very least privacy concerns. We have not seen any lab testing show a security product that covers all of the above.

These are several reasons why a lab's testing may give a pleasing score for a product, yet if the user has only that one product installed and kept up to date, it would be no wonder their machine was soon found with malware running on it.

What about privacy versus security? The two terms are related, and we tend to lump them together, but are these products being tested by the labs only for security and not for privacy issues as well?

We do not wish to say careful testing by these labs is of no benefit; on the contrary. But their results are only a portion of the whole picture, and one cannot logically use them alone to ascertain which security software would be the good one to install. The ideal scenario would be twofold: comprehensive, complementary features from two or three programmes running on one's computer, and lab testing that is far more comprehensive, covering privacy and security across all the most common vectors of breach, instead of the limited testing being done today (which partially reflects the lack of such security features in the programmes themselves). Two or three programmes follow from the idea of "layered security": if some really bad malware takes out a module, a portion of, or the whole of one security programme, the user is not defenseless, because additional resources are still running and may stop what would otherwise be a complete, unnoticed breach.

More testing, done more comprehensively, is a good thing. Testing with more than one security programme running is a good thing. (This is not to say one should run more than one antivirus at the same time; rather, two or three programmes such as an antivirus, a separate HIPS from another vendor, and a good firewall are a good example.) This might play havoc with the labs' billing for testing and/or publishing of results, but users would be all the wiser and better able to secure their computers. Right?
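The layered-security idea can be made concrete with probabilities. Assuming, optimistically, that layers fail independently, the chance a threat slips past all of them is the product of the individual miss rates; the rates below are invented for illustration.

```python
# If protection layers fail independently, the chance that a threat slips
# past all of them is the product of the individual miss rates.
# The miss rates below are assumptions for illustration, not measurements.

def combined_miss_rate(miss_rates):
    """Probability a threat evades every layer, assuming independence."""
    result = 1.0
    for rate in miss_rates:
        result *= rate
    return result

av_only = combined_miss_rate([0.10])                  # AV alone: ~10% slip through
av_hips_fw = combined_miss_rate([0.10, 0.30, 0.50])   # AV + HIPS + firewall: ~1.5%
```

In practice layers are not independent (one exploit may disable several at once), so this overstates the benefit; even so, it shows why adding a second, different layer tends to matter more than a few points of headline detection rate on a single product.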

    These are just some of the reasons that one cannot use just the combined lab testing results and find "a good programme" to use. Any thoughts on these points?
     