VB100 April 2010...

Discussion in 'other anti-virus software' started by King Grub, Apr 12, 2010.

Thread Status:
Not open for further replies.
  1. fax

    fax Registered Member

    Joined:
    May 30, 2005
    Posts:
    3,898
    Location:
    localhost
    I think you are missing the point.
    ZA and Kaspersky (version 6, revision 4) failed for the same reason:

    Status: FAIL
    Failure reason: 1 wildlist miss
    Product name: Kaspersky Anti-Virus 6 for Windows Workstations
    More: April 2010 in full
    Review: Kaspersky Anti-Virus 6 for Windows Workstations on Windows XP

    Both use the same engine as KAV 2010 (and therefore the same signatures). Probably a different set of AV DATs was used, or the heuristic settings of the enterprise version differ from those of the retail version (note that KAV 6 MP4 was released after KAV 2010).

    I am referring to the RAP test, where ZA scores better than Kaspersky. Something other than the AV engine must be tested there; otherwise the better performance of ZA cannot be explained.

    Fax
     
    Last edited: Apr 15, 2010
  2. skokospa

    skokospa Registered Member

    Joined:
    Apr 1, 2009
    Posts:
    177
    Location:
    Srbija
    These quotes are taken from the Emsisoft forum, and I completely agree with that thinking.

    Eugene Kaspersky said:
    ..the tests conducted by VirusBulletin (an industry publication) - I am sure that if I didn't include this, readers would ask why the tests and the resulting VB100% award hadn't been mentioned. Sadly, these tests are far from perfect. The test standards were developed in the mid-1990s and have barely changed since then. Antivirus products are tested using a collection of files infected by ITW viruses. The award is given on the basis of the test results. However, the ITW collection only contains between two to three thousand files - fewer malicious programs than appear in the wild in the space of a single month. Therefore, a VB100% award doesn't necessarily mean that a product really provides protection against all types of malware. It simply means that the product copes well with VirusBulletin's ITW collection, nothing more.

    Doctor Web sees the issues of the comparative testing as follows:
    1. Testing an anti-virus for VB100% is based on the In-the-Wild set of viruses, which includes only malware capable of replicating itself; this surely narrows the list of malicious programs used for the testing. As estimated by Doctor Web, the In-the-Wild collection includes only 10 per cent of the total number of malware that modern anti-viruses protect against.
    2. The above-mentioned criterion applied to the In-the-Wild collection leaves out a large segment of present-day malware – Trojans. The same applies to one of the gravest IT security issues of the last 4-5 years, so-called rootkits. No matter how good an anti-virus is at detecting Trojans, which outnumber viruses manifold, and no matter what its rootkit counteraction capabilities are, it will only get the VB100% upon successful detection of several thousand samples from the In-the-Wild collection. Alas, VB100%, used as an ultimate benchmark by some marketing specialists and industry experts, won't show a user whether an anti-virus is really efficient against Trojans.
    3. In order to address new challenges, Dr.Web keeps developing, as do all other AV products. AV vendors have to deal with new virus-writing technologies on a daily basis, which makes constantly bringing innovations into an anti-virus a must. And here regular updates of a virus database are not enough. The testing for VB100% doesn't compare the technical innovations of anti-viruses developed to counteract malicious programs that are never included in the In-the-Wild collection.
    4. It's not a routine scan of a collection of files that shows how good an anti-virus is; it is a malicious attack, when malware is attempting to get onto a computer or the computer has already been infected. Recent years saw numerous proposals to create tougher conditions for testing anti-viruses and to assess them by their ability to cope with an active infection. An anti-virus can show astounding results detecting samples from the In-the-Wild collection, but users will never know whether it is just as effective when the malware is running in RAM and controls the system rather than being stored on a hard drive. Nor does the test compare the curing capabilities of anti-virus products.

    The public only sees the raw results: vendor so-and-so failed the test and missed so many infected files. One has to register with VB and pay for a subscription to obtain the full testing report. Will the average PC user spend $175 USD yearly for VB's testing results? I think not. Instead they will rely solely on the very misleading publicly available raw data.

    VB should not be using the term "failed" when it comes to the VB100 award. Vendor so-and-so did not receive the VB100 award because they did not detect 100% of the virus samples with zero false-positive detections. If you detect 100% of the virus samples with 1 FP, you fail. If you detect 99.99% of the virus samples with 0 FPs, you also fail. Making only the raw results publicly available without further explanation does everybody a huge disservice, vendors and end-users alike.
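    To make that criterion explicit, here is a minimal sketch of the pass/fail rule described above (the function name and inputs are hypothetical, purely for illustration; VB publishes results, not code):

        def vb100_result(wildlist_detection_rate, false_positives):
            # VB100 rule as described above: the award requires 100% detection
            # of the wildlist set AND zero false positives; anything else fails.
            if wildlist_detection_rate >= 100.0 and false_positives == 0:
                return "PASS (VB100 award)"
            return "FAIL"

        # Hypothetical examples mirroring the post:
        print(vb100_result(100.0, 1))   # FAIL: perfect detection, but 1 FP
        print(vb100_result(99.99, 0))   # FAIL: no FPs, but one wildlist miss
        print(vb100_result(100.0, 0))   # PASS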
     
    Last edited: Apr 14, 2010