http://www.av-comparatives.org/

Discussion in 'other anti-virus software' started by FYI, Feb 25, 2004.

Thread Status:
Not open for further replies.
  1. FYI

    FYI Guest

    http://www.av-comparatives.org/
     
  2. meneer

    meneer Registered Member

    Joined:
    Nov 27, 2002
    Posts:
    1,132
    Location:
    The Netherlands
    :D
    So, in order to get tested, it must have been tested o_O
    Perhaps this site is okay, but this makes it look stupid :doubt:
     
  3. Trans

    Trans Registered Member

    Joined:
    Dec 30, 2003
    Posts:
    76
    If the tests there have any value,
    interesting results for Panda and NOD...
     
  4. FanJ

    FanJ Guest

    Any info on the test set?
    Any info on whether those statistics cover in-the-wild and/or zoo viruses?
    Any info about how the test was performed?
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    Hello,

    yes, there were preliminary tests in order to discover which products could be tested, as, for example, not every company would like to submit their products if they know they will get poor results. Another reason is that it would also take too much time to test products that do not fulfill the conditions (those are just 3 of about 10 conditions).
    I was also a bit surprised by the results of Panda; I expected lower results, but it seems Panda has increased its detection rate quite a lot in recent months. (I tested Panda Platinum Internet Security; Panda Titanium and Platinum AV have slightly lower detection rates, as they do not detect, for example, hacker/virus tools, spyware, etc.)
    I expected higher results for NOD32, as I also used the /AH option. NOD32 apparently removed some old DOS viruses from its databases; I think this is also one reason why it got such a low total result. I am sure ESET will do their best to get better results next time. Anyway, all tested products are already very good; in my opinion, NOD32 still has the best heuristics of all, but I can only say that for sure after the next test.
    Regards,
    IBK

    P.S.: I do not visit often this forum, so I will not be able to respond to all questions here.
     
  6. MikeBCda

    MikeBCda Registered Member

    Joined:
    Jan 5, 2004
    Posts:
    1,627
    Location:
    southern Ont. Canada
    Interesting. I completed (or tried to) their survey while I was there, and someone should get the word to them that it's very poorly designed.

    If you tick "no" to the question about whether you ever got a virus even with a-v running, and naturally skip the following question about how you got the virus, it kicks back to you that there's an answer missing.

    And the wording about very old DOS viruses ("Do you pretend you're protected ...?") was a little insulting.
     
  7. nameless

    nameless Registered Member

    Joined:
    Feb 23, 2003
    Posts:
    1,184
    Does anyone ever, like me, wonder if the people who carry out these tests notify the vendors about the malware their products missed? Or, if they don't... why they don't?

    Or, how about why every test rates KAV very, very well, yet KAV doesn't have a stellar VB100 track record? I know that the problem with KAV where VB100 is concerned has to do with false positives, but this tells me that some aspects of the VB100 test are pretty damned stupid.
     
  8. Kay Maier

    Kay Maier Guest

    Good day!

    The difference with these tests is the following: Virus Bulletin publishes its test set (the names) before it performs the tests. Therefore, ESET can easily add all of these viruses. The test set VB uses is also rather small.

    Other testers, such as Clementi or Marx, do not publish the test set in advance, and it includes many more samples. Virus Bulletin has only about 400 different viruses in its test set, in 10,000 files or so. Clementi and Marx seem to use many more viruses for testing (50,000? in 250,000 infected files?). At least of Marx I know that he sends out a collection of all missed viruses after the test is finished and the results are final.

    Regards,
    Kay
     
  9. illukka

    illukka Spyware Fighter

    Joined:
    Jun 23, 2003
    Posts:
    633
    Location:
    S.A.V.O
    Yeah, VB only tests whether a scanner has a signature for those viruses that are currently in the wild, and whether that signature is strong enough not to produce false positives. It says nothing about overall detection ability.
     
  10. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    I follow more or less the same principles as Marx has and the same fair conditions as VB. ;)

    About the survey: Yes, I know it has some problems; but believe me, I got such a headache making it that I decided to put it online as it is, as otherwise I would have had to delay publishing the results by another week. So please accept the survey as it is; I am not going to spend more time fixing it (there are more errors in the survey questions than the ones you named). Anyway, if you read the note at the top, you must fill out all questions, so I do not understand why you wonder ;)

    P.S.: NOD32 is a very good scanner, even if you may think the results are low. NOD32 protects you against all ITW viruses and is a fast scanner. As I said, the results shown are just one small aspect of a scanner; there are still many other interesting aspects that make NOD32 very attractive. :)
     
  11. Bender

    Bender Guest

    Kay Maier ----- ROFL
     
  12. sir_carew

    sir_carew Registered Member

    Joined:
    Sep 2, 2003
    Posts:
    884
    Location:
    Santiago, Chile
    IBK,
    ESET doesn't delete any virus/trojan/exploit (malware) from its database, even if it's a very old DOS virus.
     
  13. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To Bender from Firefighter!

    Kay Maier isn't so wrong in what he/she wrote. Here are the polymorphic virus names in VB 6-2002 and 6-2003; can you see any big differences there?

    Polymorphic Viruses VB 6-2002 winXP (42 different virus names):

    ACG.A 174, ACG.B 90, Alive.4000 500, Anarchy.6503 500, W97M/AntiSocial.E 50, W97M/AntiSocial.F 20, Arianna.3076 500, Baran.4968 500, XM/Compat.A 50, Code.3952:VICE.05 500, Cordobes.3334 500,
    Cryptor.2582 500, W32/CTX 84, Digi.3547 500, DSCE.Demo 500, W32/Fosforo 61, Girafe:TPE 500,
    Gripe.1985 500, W97M/Groov.B 50, Mad.3544 500, W32.Magistr. 17, Win95/Marburg.8590 744,
    Groove and Coffeeshop 500, MTZ.4510 500, Natas.4744 500, Neuroquila.A 500, Nightfall.4518.B 500, One_Half.3544 500, Pathogen:SMEG.0_1 500, PeaceKeeper.B 500, Russel.3072.A 500, SatanBug.5000.A 500,
    Sepultura:MtE-Small 500, Win95/Sk.7972 7, Win95/Sk.8044 21, SMEG_v0.3 500, Spanska.4250 500, W97M/Splash.A 100, W97M/Service.A 100, Tequila.2468.A 500, Uruguay.4 500, W32/Zmist.D 43


    Polymorphic Viruses VB 6-2003 winXP (43 different virus names):

    ACG.A 174, ACG.B 90, Alive.4000 500, Anarchy.6503 500, W97M/AntiSocial.E 50, W97M/AntiSocial.F 20, Arianna.3076 500, Baran.4968 500, XM/Compat.A 50, Code.3952:VICE.05 500, Cordobes.3334 500,
    Cryptor.2582 500, W32/CTX 84, Digi.3547 500, DSCE.Demo 500, W32/Etap 29, W32/Fosforo 61, Girafe:TPE 500, Gripe.1985 500, W97M/Groov.B 50, Mad.3544 500, W32.Magistr. 17, Win95/Marburg.8590 744,
    Groove and Coffeeshop 500, MTZ.4510 500, Natas.4744 500, Neuroquila.A 500, Nightfall.4518.B 500, One_Half.3544 500, Pathogen:SMEG.0_1 500, PeaceKeeper.B 500, Russel.3072.A 500, SatanBug.5000.A 500,
    Sepultura:MtE-Small 500, Win95/Sk.7972 7, Win95/Sk.8044 21, SMEG_v0.3 500, Spanska.4250 500, W97M/Splash.A 100, W97M/Service.A 100, Tequila.2468.A 500, Uruguay.4 500, W32/Zmist.D 43


    Yes, only W32/Etap 29 was new in the VB 6-2003 polymorphics compared with 6-2002, and the VB 8-2003 NetWare test has exactly the same virus names as the VB 6-2003 WinXP polymorphics!

    You can check my lists from official VB published PDF issues 6-2002 and 6-2003 if u want.
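    Firefighter's comparison can be reproduced mechanically with a set difference over the published name lists. A minimal sketch (using only a small subset of the listed names for brevity; the full lists above would be handled the same way):

    ```python
    # Illustrative sketch: diff two VB polymorphic test sets by virus name.
    # Only a subset of the full published lists is shown here for brevity.
    vb_2002 = {"ACG.A", "ACG.B", "W32/Fosforo", "W32/Zmist.D"}
    vb_2003 = {"ACG.A", "ACG.B", "W32/Etap", "W32/Fosforo", "W32/Zmist.D"}

    added = sorted(vb_2003 - vb_2002)    # names new in the 2003 set
    removed = sorted(vb_2002 - vb_2003)  # names dropped since 2002

    print(added)    # ['W32/Etap']
    print(removed)  # []
    ```

    Run against the complete lists, this yields exactly one addition (W32/Etap) and no removals, which is the point being made.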

    Besides that, the "In the Wild" list is more or less known before VB tests those AVs!


    PS. There were actually some 1,600 different virus names in the VB 6-2003 WinXP test, which measured macro, polymorphic and standard viruses, in about 21,000 sample files, so Kay Maier's figures of 400 and
    10,000 may be a bit inaccurate; but most of them were more or less known before the test.


    "The truth is out there, but it hurts!"

    Best regards,
    Firefighter!
     
  14. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    Thanks for letting me know. When I have time, I will check what's up and report directly to ESET if I find a reason why, about 6 months ago, NOD32 appeared to detect more DOS viruses than it does now.
     
  15. mrtwolman

    mrtwolman Eset Staff Account

    Joined:
    Dec 5, 2002
    Posts:
    613
    With all due respect to the author of the test, based on the information about the test published on the site, I'd label it as not too professional.

    My reasons:
    1. Only AVs with one scan engine can take part in the test. Why?
    2. There are no data on the test sets: no data on the origin of the viruses, no data on their ability to replicate... kind of in the good old virus.gr test style.
    3. What is the "best possible setting" for an antivirus? As long as the exact settings are not published, it gives no relevant information.
    4. There is no way to confirm the test results.

    I see some problems with methodology here....
     
  16. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    Hello,

    1. Because I primarily tested the scan engines, not the products. F-Secure or AVK would of course get better results than the others, as they use, for example, the F-Prot engine and the KAV engine.
    2. The test-set data is known to the AV producers. Normal users do not need such information, and of course we are able to replicate samples. The AV companies agreed to be tested by us; we are not comparable with amateur underground tests like virus.gr. Anyway, anyone can prefer whichever tests they like better.
    3. The best possible settings are the best possible settings; for example, all files, deep heuristics, etc. (The best settings are also known to the AV producers and confirmed by them if unclear.)
    4. The AV companies have been in contact for a long time, and the test results can be confirmed, as they can check the results themselves.
    The methodology follows standard testing procedures, like those VB or Marx use. Anyway, I will talk to some people about the methodology, most probably at the next EICAR conference in Luxembourg. See you there!

    Regards,
    Andreas
     
  17. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To everyone from Firefighter!

    What do you think of the value of a certain test when it has only 1 (one) new virus name in one of its detection categories 14 months later? In my vocabulary, those tests are some kind of facelift, done to appear more acceptable among the ignorant!

    In other words, 14 months later VB is only testing whether those tested AVs detect 1 new virus or not in their polymorphic sample database.

    Where is the ROFL now?


    Best regards,
    Firefighter!
     
  18. mrtwolman

    mrtwolman Eset Staff Account

    Joined:
    Dec 5, 2002
    Posts:
    613
    With regard to the above statement - maybe "scan engine test" would be more appropriate than "antivirus test"....

    Anyway, a statement like "all samples are replicable and their replicability was verified in this and that way" would improve the impression given by the naked numbers presented in the tables. Another statement along the lines of "list of tested files available upon request" would boost the image of the test too.

    Nevertheless, a couple of bytes of extra typed characters explaining the settings would not cause any harm.

    I just wanted to point out the simple fact that a known test methodology greatly boosts credibility. One reason the VB tests are so respected is that anyone with enough know-how is able to verify the test results on their own.

    Maybe in the future you could add a hard poly-detection test - tens of thousands of poly viruses in hundreds of generations. The results could sometimes be very, very interesting.... :cool:
     
  19. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To mrtwolman from Firefighter!

    You wrote,

    "One reason the VB tests are so respected is that anyone with enough know-how is able to verify the test results on their own."

    In my mind, there is a bit of a difference between respected and reliable. VB may be reliable, but testing one new virus in the polymorphic category 14 months later does not make VB respected in my vocabulary!


    "The truth is out there, but it hurts!"

    Best regards,
    Firefighter!
     
  20. mrtwolman

    mrtwolman Eset Staff Account

    Joined:
    Dec 5, 2002
    Posts:
    613
    You have the right to your opinion, just as I have the right to mine. With all due respect, there is no test on this planet better than VB's. My definition of "respected" is a bit different - there has been no "issue" with the VB tests since the very beginning of the testing. Sure, VB's tests are limited in some aspects, but they bring valuable information. They are replicable. They have a clear methodology. They use real replicating samples. They are performed the same way every time. Just try to mention any test able to hold the same level as VB :cool:
     
  21. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To mrtwolman from Firefighter!

    You wrote,

    "Sure, VB's tests are limited in some aspects, but they bring VALUABLE information."

    So, what kind of value does that kind of test have - whether an AV detects one (1) polymorphic virus 14 months later or not?

    Detected: YES or NO - but what can you conclude about the AV's reliability? Nothing!

    I still rank that kind of VB test as at least misleading!


    "The truth is out there, but it hurts!"

    Best regards,
    Firefighter!
     
  22. Bender

    Bender Guest

    Firefighter ----- a ROFL for you too! I will not waste my time trying to explain where you are wrong, because your past posts tell me you do not want to know, only to argue with a closed mind.
     
  23. Technodrome

    Technodrome Security Expert

    Joined:
    Feb 13, 2002
    Posts:
    2,140
    Location:
    New York
    Let's stay on topic, folks. ;)



    tECHNODROME
     
  24. mrtwolman

    mrtwolman Eset Staff Account

    Joined:
    Dec 5, 2002
    Posts:
    613
    Before I rest my case in this discussion, I kindly ask you to answer my questions:

    1. Can you name another antivirus test that matches VB's?
    2. Can you name one new polymorphic virus that was ITW in the last 14 months?

    And now I rest my case.
     
  25. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To Bender from Firefighter!

    My history here has nothing to do with this topic. I would be happy to see some errors in my post right now, but you just can't find any!

    Besides, even I can make mistakes occasionally, but I admit them when someone has real facts to show me. I'm only now presenting these things the way others presented them to me recently, when I called some AV tests other than VB trustworthy!


    "The truth is out there, but it hurts!"

    Best regards,
    Firefighter!
     