AV-Test.org March test results

Discussion in 'other anti-virus software' started by Valentin_Pletzer, Mar 10, 2008.

Thread Status:
Not open for further replies.
  1. Ngwana

    Ngwana Registered Member

    Joined:
    Jul 5, 2006
    Posts:
    156
    Location:
    Glasgow, United Kingdom

    If anything is to be concluded from AV comparative tests, it is this:

    1. If you ever need an antivirus, get one that does not crash your system, and update it regularly.

    2. It does not have to be the highest ranked, as that will always change. However, a product that consistently performs poorly across different tests or comparatives should be given a second thought.

    3. Technicalities and test methods are not nearly as important as the samples used. A real-life analogy: it does not matter whether you caught a virus sitting down or standing up; what matters is that you got infected. An AV must find a virus no matter where or how it is concealed in the test set.

    4. The issue is not whether to trust the 'results', but whether the results can be explained and made useful. You do not have to be an AV expert to tell if results are rubbish. Pay attention to what the tester complains about; that makes it easier to smell bias or ignorance.

    5. The other problem is that almost all AV vendors now claim their products can detect 'anything', which makes it difficult for testers, as they end up with a collection of meaningless samples of malware. What do you do if a particular AV product is improving at detecting spyware but getting worse at finding rootkits? Or is ranked highest at finding polymorphic viruses but is not so good at detecting Trojans? Unless the tester makes crystal clear what he or she tested, the ranking is not helpful.

    6. Of course, make sure your favourite AV product is high in the ranking; if not, 'complain'. :rolleyes:
     
  2. egghead

    egghead Registered Member

    Joined:
    Aug 27, 2005
    Posts:
    443
    Location:
    The Netherlands
    and that history is.........?
     
  3. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Joined:
    Sep 30, 2003
    Posts:
    702
    The retro part of the test looks very strange to me.

    BD, F-Secure, Panda, Eset, Sophos ahead of WebWasher? WebWasher has the most aggressive detection I know of. No way in hell the other products can have higher detection.

    I suspect the test set is full of the dumped/unpacked garbage that has been floating around in various collections recently. Or av-test used default out-of-the-box settings. But then, WebWasher uses Avira with heuristic level 3 by default.

    No, sorry, this really doesn't add up.
     
  4. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    @Stefan: no, I think the products are simply listed there in alphabetical order. It does not say which of the products rated ++ detect more or less than the other products rated ++.
     
  5. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Joined:
    Sep 30, 2003
    Posts:
    702
    Right, then it is likely that default settings were used - which means level 2 heuristics and no SPR, APPL, or PCK detections for Avira (but WebWasher has them).
     
  6. the insider

    the insider Registered Member

    Joined:
    May 25, 2005
    Posts:
    151
    true or false ... AVAST! is doing great and that's the main thing ... (for me ;) )
     
  7. Valentin_Pletzer

    Valentin_Pletzer Registered Member

    Joined:
    Jun 19, 2007
    Posts:
    11
    @Stefan
    IBK is right. The products are listed in alphabetical order, since no more detailed data is available.
     
  8. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Joined:
    Sep 30, 2003
    Posts:
    702
    So what is the purpose of "obscuring" test details? Why not give the actual numbers?
     
  9. Mele20

    Mele20 Former Poster

    Joined:
    Apr 29, 2002
    Posts:
    2,495
    Location:
    Hilo, Hawaii
    I think it was around the time you joined, as I still had NOD32, so maybe you joined after the incident. Andreas Marx has the dubious "honor" of being one of a very small number of members who have been forever banned from Wilders for serious infractions of the TOS. I watched it happen. You do at least know about Eset's stormy relationship with Andreas, and what Andreas tried to do to Eset? If not, go read the NOD32 forum from about three years ago.
     
  10. kinwolf

    kinwolf Registered Member

    Joined:
    Oct 19, 2006
    Posts:
    271
    You know what I personally hate about these tests? That we are only given a symbol or a number. What I'd like to see is the actual scan data. Yes, it would be voluminous, but THAT would be real information, imho. Like: are the missed samples zoo samples? Are they old viruses no longer in the wild, like Win95 viruses? Are some samples simply harmless files inside a known cracked packer? What is in the script virus category (av-comp.), and are there really new script viruses coming out? Etc.

    I bet that if the test set were more open, those garbage files would eventually be weeded out.

    Seeing a ++ or 98% might be nice, but in the end it doesn't cut it for me. I am always left wanting more. Not much analysis can be done with a simple number or symbol. :(
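
    For illustration, here is a rough sketch of the kind of per-category breakdown that raw per-sample data would allow, assuming a hypothetical log file (the file name and columns are invented; av-test.org publishes no such file):

    Code:
    # Hypothetical sketch: break one headline number down by sample category,
    # assuming a CSV with one row per scanned sample. Layout is invented.
    import csv
    from collections import defaultdict

    detected = defaultdict(int)
    total = defaultdict(int)

    with open("scan_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            cat = row["category"]        # e.g. "zoo", "ItW", "script", "packed"
            total[cat] += 1
            if row["detected"] == "1":
                detected[cat] += 1

    for cat in sorted(total):
        rate = 100.0 * detected[cat] / total[cat]
        print(f"{cat:>8}: {detected[cat]}/{total[cat]} ({rate:.1f}%)")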

    Is that new AV testing alliance, AMTSO, supposed to have more open testing procedures?
     
    Last edited: Mar 11, 2008
  11. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    That's indeed somewhat "strange" regarding the proactive results. What I *suspect* is that once again the set consisted mostly of files meant to trigger *behavior blockers*; that way a "normal" heuristic doesn't look so good. Sure, a heuristic will never reach a perfect score, no matter what product (otherwise we wouldn't need any signatures, which should be self-explanatory, btw), but the results shown are really low. Especially the fact that the false positives are almost "too good to be true" makes me wonder. I know *for sure* that our Eldorado heuristic is aggressive and would have triggered more than only "one" false positive. Same for Avira. Almost the same for everyone...
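
    To put rough numbers on that (invented for illustration, not the actual test figures): even a modest false-positive rate over a large clean set should produce far more than one hit.

    Code:
    # Back-of-the-envelope check with invented numbers (NOT av-test.org's
    # actual figures): expected false positives for an aggressive heuristic
    # scanned over a large clean-file set.
    fp_rate = 0.001          # assumed 0.1% FP rate for an aggressive heuristic
    clean_set_size = 65_000  # assumed number of known-clean files in the set

    print(f"Expected false positives: {fp_rate * clean_set_size:.0f}")  # 65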
     
  12. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    That's not quite "right". You have to understand the internal organization of ESET before you can judge that. I can't (and I won't) explain it, since I worked there and it would be unfair to speak for them *now*. There are *always* some "problems" between testers and producers. The problem is not the problem as such; the problem is HOW YOU DEAL with that problem. Period. So let's not always point at that old thread; believe me, by now it's "obsolete".
     
  13. egghead

    egghead Registered Member

    Joined:
    Aug 27, 2005
    Posts:
    443
    Location:
    The Netherlands
    Thanks, I did not know that. :eek: I will take a look into it.
     
  14. DavidON

    DavidON Registered Member

    Joined:
    Mar 7, 2008
    Posts:
    19
    Location:
    North Island
    Does anyone know which version of AntiVir was tested in this av-test?
     
  15. Leo2005

    Leo2005 Registered Member

    Joined:
    May 31, 2007
    Posts:
    179
    Location:
    Braunschweig (Germany)
    Premium or Suite; they should score equally. The Classic too, except for the adware and spyware detection.
     
  16. Stijnson

    Stijnson Registered Member

    Joined:
    Nov 7, 2007
    Posts:
    533
    Location:
    Paranoia Heaven
    Which NOD32 version was tested? It would be great if they could mention that in their result tables.
     
  17. Valentin_Pletzer

    Valentin_Pletzer Registered Member

    Joined:
    Jun 19, 2007
    Posts:
    11
  18. Stijnson

    Stijnson Registered Member

    Joined:
    Nov 7, 2007
    Posts:
    533
    Location:
    Paranoia Heaven
    I'm sorry, you are right. :thumb:
     
  19. Mele20

    Mele20 Former Poster

    Joined:
    Apr 29, 2002
    Posts:
    2,495
    Location:
    Hilo, Hawaii
    Why is that thread obsolete? Was it a figment of everyone's imagination? All those folks who got permanently banned from this site because of their posts in that thread never existed? We all imagined that? Anton's formal statement of Eset's position on Andreas's test is no longer valid? That thread is a very important piece of history regarding the testing of AVs. It has great relevance to this thread, and to a thread from last month, which I somehow missed, regarding F-Prot's and DrWeb's withdrawal from IBK's tests. After reading the DrWeb/F-Prot thread yesterday, I came to one conclusion. I posted my thoughts in the Avira forum.
    http://forum.avira.com/thread.php?postid=317028#post317028

    Here's the "infamous" thread you want everyone to ignore. I think anyone interested in AV testing who has not read this historical thread should do so and decide for themselves what they think, instead of you telling them it is not important anymore. It is history. We learn from history, or we are condemned to repeat it. After reading the DrWeb/F-Prot thread, I don't think anyone has learned much, as I stated in the Avira thread.

    https://www.wilderssecurity.com/showthread.php?t=8912&highlight=andreas marx
     
  20. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,286
    Location:
    Las Vegas
    It is better for some people not to remember history. The part that is remembered is often "revisionist" in nature, hiding unpleasant facts of reality.
     
  21. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    The thread linked by Mele brought back fond memories. I always liked Vampirefo, Rodzilla, Tinribs, and several other posters in that lovely & contentious thread. I miss them.

    The thread cited by Mele is (it seems to me) a vivid testimony as to why a forum should NOT allow unregistered posters.

    As comparing the tests *back then* against contemporary tests, such as the one that is the topic of this thread, shows, all AV programs have come a long, long way since the days of the thread linked by Mele. At least some small portion of this progress should be attributed to the pressure imposed by the mere existence of comparative testing.

    The fact remains that tests are a tool. As with any tool, the user thereof must exercise a modicum of common sense when interpreting the results.
     
  22. Macstorm

    Macstorm Registered Member

    Joined:
    Mar 7, 2005
    Posts:
    2,642
    Location:
    Sneffels volcano
    Very very interesting reading :cautious:

    thanks for the link :thumb:
     
  23. agoretsky

    agoretsky Eset Staff Account

    Joined:
    Apr 4, 2006
    Posts:
    4,033
    Location:
    California
    Hello,

    The message thread discussing the tests performed several years ago is a piece of history I am not familiar with; however, it was answered and dealt with successfully by all of the involved parties.

    Since there seemed to be some question about this, the message thread has been updated here with an explanation at the beginning and the actual agreement reached by all the involved parties.


    Regards,

    Aryeh Goretsky
     
  24. Tweakie

    Tweakie Registered Member

    Joined:
    Feb 28, 2004
    Posts:
    90
    Location:
    E.U.
    Interesting. Although the proposed explanation does not comply with this scan log or, more importantly, that one, of course.
     
  25. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Scan logs are really quite useless without the samples concerned. How, for instance, would you know that they weren't just scanning a bunch of empty text files?
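
    That said, a tester could make logs verifiable without releasing the samples themselves. As a rough sketch (the paths and output layout here are invented), publishing a SHA-256 hash and file size per sample would let anyone holding the same files cross-check the log, and empty or trivial files would be obvious at a glance:

    Code:
    # Hypothetical sketch: emit a SHA-256 hash and size per sample so third
    # parties can confirm a scan log refers to real, non-empty files without
    # the tester distributing the malware itself. Paths are invented.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for sample in sorted(Path("testset").iterdir()):
        # An empty file stands out immediately: size 0 and the well-known
        # SHA-256 of zero bytes (e3b0c442...).
        print(f"{sha256_of(sample)}  {sample.stat().st_size:>10}  {sample.name}")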
     