Latest AV-comparatives.org test results (was: Latest test results for Nod32)

Discussion in 'other anti-virus software' started by Edwin024, Nov 30, 2004.

Thread Status:
Not open for further replies.
  1. Starrob

    Starrob Registered Member

    Joined:
    Apr 14, 2004
    Posts:
    493
    Re: Latest test results for Nod32

    KAV, NOD32, and PG seem to me a very powerful combination. PG would block most trojans, rootkits, and keyloggers, and KAV and NOD32 would perform mop-up duty. That is not the combination on my computer, but it would be good enough for me.

    The only other thing that might be good is something to monitor the registry.

    Starrob

     
  2. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Re: Latest test results for Nod32

    Eight samples don't prove very much; there were also 8 macro viruses, where NOD scored 0. When I scanned my 236 macro viruses without signatures but with Advanced Heuristics on, NOD detected 209 of them, which is 88.6%.
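
    (As a quick check of that figure, a minimal arithmetic sketch in Python; the counts are the ones above:)

        # Detection rate from the macro scan above: 209 of 236 detected.
        detected, total = 209, 236
        print(f"{detected / total:.1%}")  # -> 88.6%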

    Best regards,
    Firefighter!
     
  3. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Re: Latest test results for Nod32

    But the difference is that those 8 are officially on the WildList, while the 8 macros are not. There are only 8 because only 8 new ITW samples appeared. I think that is clear enough.
     
  4. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Re: Latest test results for Nod32

    If I scanned with NOD using Advanced Heuristics only, without signatures at all, what would be the difference from this heuristics test? Aren't all those files new to the program when that option is used?

    Best regards,
    Firefighter!
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Re: Latest test results for Nod32

    I think one of us is missing the point, or at least I am now too tired to understand what you mean. I do not understand why you would want to use AH only, for example. I mean, what is the question?
    _Maybe_ I now understand what you mean: dunno what the difference would be, as I have not tried using only the AH. Probably no difference, yeah. But the other programs do not have this option, so what is the purpose of your question?
     
    Last edited: Nov 30, 2004
  6. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Re: Latest test results for Nod32

    Just read my post 52 again! Eight samples aren't enough to draw further conclusions, detected or not, as I showed with my own scan without signatures against 236 macros.

    Best regards,
    Firefighter!
     
  7. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Re: Latest test results for Nod32

    I think now I probably understand what you mean. But I still hold the opinion that detection of the 8 ITW samples is more important than detection of 8 macro samples that are not ITW. The 8 ITW samples were really spreading out there, and those who used e.g. NOD32 would be safe against them even if they had not updated their AV in a long time, or in a zero-day situation. The 8 macros are so far not spreading and are most probably only in the zoo, so if they are not detected proactively, who cares. Detection failures in a retrospective test are not up for discussion; the test only shows the pure proactive detection capability of the scanners.
     
  8. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Re: Latest test results for Nod32

    I totally agree with that. But still, the real detection rate against forthcoming ItW stuff could be anywhere between 30 and 80%; nobody can draw conclusions from 8 samples. Besides, TrojanDownloaders and TrojanDroppers are also very, very ItW nowadays, but they don't fill your PC up the way viruses do. They aren't in the ItW list at all.

    Best regards,
    Firefighter!
     
    Last edited: Dec 1, 2004
  9. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Re: Latest test results for Nod32

    Yes, I agree with you. I think I also wrote this (or something similar) in the report (or maybe in the PDF with the FAQs).
     
  10. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Re: Latest test results for Nod32

    Yes, I have seen that too.

    Btw, NOD is quite a good performer against (zoo) trojan-like malware nowadays too; who would have believed that about a year ago?

    Best regards,
    Firefighter!
     
    Last edited: Dec 1, 2004
  11. Benvan45

    Benvan45 Registered Member

    Joined:
    Jul 27, 2004
    Posts:
    556
    Re: Latest test results for Nod32

    Could someone explain to me why this topic has not been moved to the proper NOD section?
    Some time ago I dropped a remark about Panda being a great AV scanner in the NOD section and was asked to place that kind of remark in the proper section.

    I find this very irritating... NOD in front, NOD in the back, NOD sideways...
    This is a great forum, but once again... it just smells like NOD32 and nothing else, and I find that a great pity.

    o_O Putin
     
  12. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Re: Latest test results for Nod32

    I was surprised by the results too. This is something I like about testing: I never know what the results will be until the test is completely done :)

    BTW: the forum on the website is currently a bit quiet. I would like to see more people in it. If anyone has not registered yet, please do, and welcome! http://www.av-comparatives.org/forum


    Edit: yes, the title is a bit too NOD32-oriented. Next time you talk about test results from my site, please use a more "neutral" title ;-) I was not fast enough to post here in the forum, otherwise I would have put a topic in this subforum where the talk would be only about the test, like in the past. Well, next time I will be faster :p
     
  13. TAG97

    TAG97 Registered Member

    Joined:
    Feb 10, 2002
    Posts:
    616
    Location:
    Connecticut USA
    Re: Latest test results for Nod32

    I agree 100%.
    Here's another NOD32 topic in the "Other Anti-Virus Software" forum: https://www.wilderssecurity.com/showthread.php?t=56029
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Re: Latest test results for Nod32

    @ putin & TAG97,

    This thread was moved early on to the Other Anti-Virus Software forum (see post #9), since the content was very likely to move beyond NOD32 given the subject matter of the test, and it is neither a NOD32-specific topic nor a support issue - despite the title of the thread.

    It was not moved capriciously, but rather to encourage general discussion, since the subject of the test is one of general interest in the AV community. While much of the commentary has focused on NOD32 thus far, it is not the only AV that, in my opinion, performed extremely well in this test. Also, while we tend to focus on the final tally of malware flagged, there is a lot of information in that test that really needs to be digested to comprehend the full picture. I really wouldn't want the ongoing discussion artificially restricted to NOD32 because of the original posting location of the thread.

    This IS the proper venue to discuss, dissect, comment and speculate on this most recent effort by the folks at www.av-comparatives.org.

    Finally, while much of the discussion has centered on the AV subjects of this test, I believe that we would all be remiss not to send a message of thanks to IBK and everyone else involved in this effort for an absolutely first-rate job! Well done, once again!

    Blue
     
  15. mercurie

    mercurie A Friendly Creature

    Joined:
    Nov 28, 2003
    Posts:
    2,448
    Location:
    Sky over the Wilders Forest
    Re: Latest test results for Nod32

    Fellow Creatures,
    I seldom visit the NOD Forum because I do not use NOD. However, that does not mean I am totally uninterested in it. A discussion which includes NOD in the general context of comparisons with other AV products is, I think, appropriate, even if it ends up turning its focus to NOD.

    I find these forums to be well managed. Just my 2 cents. ;)

    Note: I will make no further comment on this as I think we would stray off topic. :)
     
  16. Access Denied

    Access Denied Registered Member

    Joined:
    Aug 8, 2003
    Posts:
    927
    Location:
    Computer Chair
    Re: Latest test results for Nod32

    I for one would like to see mks_vir on that list, with its heuristics. I use it and find it to be as close to NOD32 (which I also use) as possible.

    EDITED** I will correct my wording and say the heuristics are on a level with or above NOD32's. I am not saying NOD is not the best, just that it now has company up there, or has possibly been dethroned. No head-to-head testing has been done; I just notice that it nails everything NOD32 does on my PCs. :eek:
     
    Last edited: Nov 30, 2004
  17. Blackspear

    Blackspear Global Moderator

    Joined:
    Dec 2, 2002
    Posts:
    15,115
    Location:
    Gold Coast, Queensland, Australia
    Re: Latest test results for Nod32

    As Blue said, and I will add that the very first post was tagged onto the end of another thread; it was totally off topic, and as such I split it away into its own thread. In doing so, I gave it the title that you now see, given that at that point in time it was in the Nod32 forum and had this as its subject:
    The Nod32 support forum is just for that, "Support issues relating to Nod32"; thus if a thread wanders off into a comparison of other AVs, then rightly so it will be shifted to a more appropriate forum such as this, where further discussion can be explored to everyone's heart's content...

    Hope this helps...

    Cheers :D
     
  18. no13

    no13 Retired Major Resident Nutcase

    Joined:
    Sep 28, 2004
    Posts:
    1,327
    Location:
    Wouldn't YOU like to know?
    Re: Latest test results for Nod32

    @IBK...
    Wait around, man.
    People are already into full-fledged discussions of your research, and I don't think net users (the lazy b*****ds we are) will move from already-started threads onto a new forum.
    Patience, o great master!
    BTW: I did sign up. I'll post soon. Most users ever online was 13... Enjoy~!!!
     
  19. dvk01

    dvk01 Global Moderator

    Joined:
    Oct 9, 2003
    Posts:
    3,131
    Location:
    Loughton, Essex. UK
    OK, I've amended the topic title to make it clearer what it is about.
     
  20. Mele20

    Mele20 Former Poster

    Joined:
    Apr 29, 2002
    Posts:
    2,495
    Location:
    Hilo, Hawaii
    Re: Latest test results for Nod32


    I have Acrobat Reader 5.0.5 and will never install 6.0, not even so that I can read this test. You should make it available by some other method. There are many users who have chosen to remain with Acrobat Reader 5.0.5.

    Also, may I ask why I have to use IE in order to see the results online? I hate using IE. But Firefox and Mozilla display your tests with text on top of text, so that I cannot read anything.

    I thought the results for Bit Defender were quite interesting. I just downloaded the free version a couple of days ago.

    As for the OT date display: if Americans were to write the date with the day first, then the month, then the year, think how much more trouble it would be to say aloud. Instead of "March fourth, 2004" one would have to say "the fourth of March, 2004" to be grammatically correct. That is a mouthful and unnecessarily awkward. The American way makes more sense. ;)
    I never know what the date is when I search on dslreports because that site uses the European method and I cannot get that straight in my head. Consequently, I don't pay attention to dates there.

    I do very much appreciate all the work you do on these tests. I have a lot of respect for your tests and eagerly await the next set. (I don't want it to sound like I don't since I griped about a couple of things). :)
     
  21. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Ok...
    On user request, the PDFs were adapted so that they can also be read with Adobe Acrobat Reader 5.x.

    About Firefox: now read the PDF with the FAQs, it contains the solution ;-)
     
  22. Benvan45

    Benvan45 Registered Member

    Joined:
    Jul 27, 2004
    Posts:
    556
    Re: Latest test results for Nod32

    That does not really help that much. I appreciate your comments, but I still feel that NOD is being lifted to heaven far too much... No matter what topic is started, NOD is always put on the top shelf by most of the Mods and higher-ranked people, and that's annoying.
    I've stated this before, and I was even asked by a high-ranked NOD mod why I even bother visiting this forum!!!!! Just because I said that I liked Panda Platinum for its easy configuration compared to NOD.
    I don't think this is right. Everybody has the right to visit, read, and get wiser about security; all I ask is... don't push NOD 24 hours around the clock in every topic started.

    Cheers also!!!! ;) Putin
     
  23. Tweakie

    Tweakie Registered Member

    Joined:
    Feb 28, 2004
    Posts:
    90
    Location:
    E.U.
    Hi IBK,

    I have several questions about the testing procedure. After reading it, I concluded that av-comparatives.org considers a sample new/unknown to a scanner according to the name the AV gives the sample:
    - If it is not recognized, it is new/unknown;
    - If it is recognized but the name contains, e.g., the strings "heuristics", "NewHeur", "Bloodhound", ".gen", "behaves like", "generic", etc., it is also considered new/unknown.

    A sample is included in the test set only if:
    - It is new/unknown (according to the above definition) to every scanner;
    - It has been sent to av-comparatives.org after the latest update of the scanners (6th of August for this test).
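
    To make the rule concrete, here is a minimal Python sketch of how I read it (the function names and the exact marker list are my own illustration, not the actual test code):

        # Name-based rule: a sample counts as new/unknown to a scanner if it
        # is missed entirely, or if its detection name looks heuristic/generic.
        GENERIC_MARKERS = ("heuristics", "newheur", "bloodhound", ".gen",
                           "behaves like", "generic")

        def is_new_unknown(detection_name):
            """True if the sample counts as new/unknown for this scanner."""
            if detection_name is None:  # not recognized at all
                return True
            return any(m in detection_name.lower() for m in GENERIC_MARKERS)

        # A sample enters the test set only if every scanner reports it
        # as new/unknown.
        def include_in_test_set(detections):
            return all(is_new_unknown(d) for d in detections)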

    Last but not least, samples are sent to av-comparatives.org by users, collectors, on-line scanner operators, and various AV companies, which in any case share some of their samples among themselves. It is mentioned somewhere on the site that the proportion of samples sent by the AV companies is increasing.

    Now, the questions:

    1/ Could the test results be affected by the way the AV companies manage their backlog (proactive tests)?

    More precisely, some companies may give a lower priority to samples that are already detected by heuristics than to fully undetected samples. Your tests might give a small advantage to such companies over the companies that do not make this kind of distinction.

    Moreover, the heuristic/generic scanning capabilities might be improved independently of the signatures. Typically, I am not completely sure that all the AVs that detect a sample as, let's say, "Gaobot.gen" really want to include specific detection capabilities for this sample. And I am not even sure that this sample was not used for designing the "Gaobot.gen" generic signature.

    Actually, the influence of the backlog-management policy on the test results could be tested. Since you provide all the samples to the AV companies after the tests, you can consider that they are now known to them. You could therefore re-scan the same dataset with up-to-date versions of the scanners and see whether there is a statistically significant difference between the identification results (i.e., detection using signatures) for samples that were previously detected heuristically and samples that were not. If the proportion of identified samples is higher in the subset that was not detected heuristically, this could mean that the test was slightly biased in favor of the considered scanner.
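
    For concreteness, such a comparison could be an ordinary two-proportion z-test (a minimal sketch; the counts below are invented placeholders, not real data):

        from math import sqrt

        def two_proportion_z(hits_a, n_a, hits_b, n_b):
            """z statistic for H0: both identification rates are equal."""
            p_a, p_b = hits_a / n_a, hits_b / n_b
            pooled = (hits_a + hits_b) / (n_a + n_b)
            se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            return (p_a - p_b) / se

        # e.g. 150 of 160 previously-undetected samples now identified by
        # signature, vs. 180 of 200 previously-heuristic samples
        z = two_proportion_z(150, 160, 180, 200)
        print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a real difference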

    2/ Would it be easy to pervert your dataset?

    Some worms add random data at the end of the files they create in order to fool simple detection engines based on an MD5 hash of the file. A lot of trojan horses can be edited to specify the IP of the attacker's server, the password of the backdoor server, etc. (the same may apply to the IRC server/channel name/password for some Agobots/Spybots, or the URL pointed to by a web downloader...). Many dialers exist in several flavors, the main changes concerning the phone numbers that are called, depending on the victim's country. Usually, when an AV scanner detects one of these samples, it will detect all of them. How can you be sure that there are not a lot of such very similar samples in your database (they could even be added intentionally, for example through the use of on-line scanners)? In my opinion this would skew any subsequent statistical measures.

    Do you verify that all the test samples are unique by (non-generic) name for at least one scanner* (except for _known_ true polymorphic/metamorphic ones - there are not that many)? This applies to proactive testing as well; a sketch of such a check follows below.

    * Preferably one that features an unpacker.
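
    As a sketch of the uniqueness check I have in mind (scan_name is a hypothetical placeholder for whatever interface the reference scanner exposes, not a real API):

        from collections import defaultdict

        GENERIC_MARKERS = (".gen", "generic", "heuristics", "behaves like")

        def find_possible_duplicates(samples, scan_name):
            """Group samples by exact (non-generic) detection name; several
            samples under one name hint at trivially modified duplicates."""
            by_name = defaultdict(list)
            for path in samples:
                name = scan_name(path)  # e.g. "Win32/Agobot.ABC"
                if name and not any(m in name.lower() for m in GENERIC_MARKERS):
                    by_name[name].append(path)
            return {n: p for n, p in by_name.items() if len(p) > 1}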


    3/ Could the fact that AV vendors provide some of the samples influence the test results?

    AV vendors could choose to provide you only with samples that were detected heuristically before they added a specific signature. This would guarantee 100% proactive efficiency for their scanner among the samples they submit. Depending on the distribution of your malware sources, this could greatly influence the results.

    Once again, this can be tested to some extent: does an AV vendor perform significantly better in proactive tests on the samples it submitted itself than on the samples coming from other sources? (The same two-proportion comparison sketched under question 1 would apply.) If the answer is yes, you may consider that this vendor intentionally introduced a bias into the test dataset. I consider this a serious issue.

    These are just questions and remarks; this is not meant as criticism of your tests. I do appreciate them, and I know they require a lot of work.

    --
    Tweakie
     
  24. Blackspear

    Blackspear Global Moderator

    Joined:
    Dec 2, 2002
    Posts:
    15,115
    Location:
    Gold Coast, Queensland, Australia
    Re: Latest test results for Nod32

    Hey Putin, you are on a website that hosts the Official Nod32 Forum, thus you would expect to see it praised, as well as the many problems that people come across, wouldn’t you think ;) :D

    You are more than welcome to discuss Panda and any other antivirus product out there; it’s just that the Nod32 support forum is exactly that, for Nod32 support and related issues. All other antivirus discussions are posted in this particular forum :D

    All the best…

    Cheers :D
     
  25. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Hello Tweakie,

    no, it is not as easy as it might look. There is no way for the AV companies to influence the test set or the test results, because I also use other methods to determine which samples are new; but I do not tell everything ;)
    It can also happen that a file gets an "apparently" exact detection but is nevertheless new. I know how to tell whether it is new or not, but I do not reveal everything, in order to avoid influence. Anyway, it would probably still be impossible for single companies to influence the test on purpose, as various methods are used over which they can have no influence. So I can assure you that all the used samples were really unknown/new and that they appeared within the last 3 months.
    I understand your fears, but I can really assure you that they cannot influence my tests. I pay very much attention to keeping the tests fair and independent.
     