ESET NOD32 is AV-Comparatives winner 2007

Discussion in 'other anti-virus software' started by JasSolo, Dec 17, 2007.

Thread Status:
Not open for further replies.
  1. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    212eta,

    The last time I ran a quick statistical calculation on it, ESET/NOD32 isn't out of line, objectively speaking. It had the largest variance between results, but some product had to, and if you allow for two standard deviations, the results all basically fall within the same population.

    The primary issue is that all too many people read these results as an accurate reflection of performance differences down to the number of significant figures provided. They're not.
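
    For anyone who wants to reproduce that kind of back-of-the-envelope check, here's a minimal Python sketch; the detection rates below are made up for illustration, not the actual test figures:

    ```python
    # Hypothetical detection rates (percent) - illustration only, not
    # the published AV-Comparatives numbers.
    detection_rates = {
        "Product A": 99.2,
        "Product B": 98.7,
        "Product C": 97.9,
        "Product D": 97.1,
        "Product E": 95.8,
    }

    rates = list(detection_rates.values())
    mean = sum(rates) / len(rates)
    # Population standard deviation: these products are the whole set.
    std = (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5

    lo, hi = mean - 2 * std, mean + 2 * std
    for name, r in detection_rates.items():
        flag = "within 2 sigma" if lo <= r <= hi else "outlier"
        print(f"{name}: {r:.1f}% -> {flag}")
    ```

    With a spread like this, every product falls inside the 2-sigma band, which is the sense in which they're all one population.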

    Blue
     
  2. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67


    Don't count on it...
    Unfortunately for them,
    I DO know WHO certain people are and HOW they "work".
    -END OF STORY-
     
    Last edited: Dec 20, 2007
  3. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67

    My friend,
    Watch out!
    They might shoot you for daring to have such an opinion...
     
  4. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Maybe that's the reason their price is high. But I still know how to get the AV for three computers for one year for $17.00 ;)
     
  5. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67

    Thanks for the information...Dude...
    It is already in the Title of this thread...
     
  6. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    I can't really moan about the price at what I paid :)

    IBK, do you know when these results will be posted for all to see?
     
  7. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    afaik, they will only be included in the German print edition of c't magazine (on Monday).
     
  8. Miyagi

    Miyagi Registered Member

    Joined:
    Mar 12, 2005
    Posts:
    426
    Location:
    None
    Exactly!!
    BTW- Where did you obtain your Ph.D. from?
     
  9. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Dude University :rolleyes:
     
  10. Miyagi

    Miyagi Registered Member

    Joined:
    Mar 12, 2005
    Posts:
    426
    Location:
    None
    LOL :eek: Getting back on topic.
    IBK has written a clear report as to why NOD32 is the winner this year. Rather than meandering into other tests, please stick to the topic.
     
  11. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,286
    Location:
    Las Vegas
    I have owned three computer companies and have used computers since 1970. I also know that there are a few shills and hacks on this site whose primary purpose is to promote their own products and put down the competition. They have a hard time denying the efficacy of NOD32, but they do it just the same. That should tell you that they are not motivated by logic and/or performance, at least in the tests I know of.

    Bunkhouse Buck tells it like it is.
     
  12. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67
    Dear Blue,

    Back to STATISTICS 101,

    1) What do you do when SOMETHING (in your case, NOD32)
    has the largest VARIANCE among the others?
    2) How LARGE and how REPRESENTATIVE was your POPULATION?

    In my Ph.D. studies, back in the early 90's,
    we had a saying: "The Art of Deception has two (2) pillars: Statistics & Marketing".
    I can so 'gently' modify the parameters/conditions of an experiment
    - and in turn its results - that I offer my customers the result they want.
    Believe me, I can do that!
    And if I can do it, everybody can ALSO do it.
    I was in your shoes many years ago...

    No hard feelings :)
     
  13. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67
    Welcome to the team!
    You are not the only one who has this view!
    NOD32 is not the problem; their method$ are...
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    How representative the malware challenge set is, is actually unknown. As for the subjects of my estimation, the sample set is the population (i.e., the AV products themselves).
    Hooray, I have one too. In a germane discipline, by the way.
    Which is why I keep preaching that all too many folks read all too fine a detail into these results. They're a good guide to rough rank ordering - with respect to the challenge set - which has somewhat unknown pertinence to the task at hand. The challenge set is roughly appropriate. However, that turns on how fine a difference one is trying to discern.
    Ummm, no, you weren't.

    None taken.

    Blue
     
  15. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67
    NOD32 was the 2007 winner for that site, which uses a specific
    pattern of AV testing.
    -So what?
    -Does this mean that EVERYBODY else has to accept/obey
    what IBK, his loyal friends, and av-comparatives.org say?

    There are many people who don't 'buy' that.
    Sorry if this is against your INTERE$T$...

    P.S.: I graduated with honors from the DUDEST University...dude...
     
  16. 212eta

    212eta Registered Member

    Joined:
    Nov 12, 2007
    Posts:
    67
    BlueZannetti: "As for the subjects of my estimation, the sample set is the population (i.e., the AV products themselves)"

    Come on Blue!

    Of course, your sample population had to do with the AV products themselves.
    I didn't expect you to test... potatoes...
    One more time:
    -How LARGE & REPRESENTATIVE was your Malware Population?
    You answered: UNKNOWN.
    That tells me A LOT!
    -As I told you, I can easily set up any kind of experiment you want and give you
    the results you want.
    Each day, more than 3,000 new viruses, trojans, worms, spyware etc. come up.
    We know the weaknesses of certain AV products.
    I can set up an experiment and have AV product 'X' at the bottom of the results.
    At the same time, I can set up another experiment and bring AV product 'X' to the top.
    With such a 'flexible' malware population (I really love the ones coming from China & Russia: the red ones), I can play a lot with my AVs and their test results.
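
    To make this concrete, here is a toy sketch - all numbers hypothetical, no real products - showing how the mix of samples alone can flip a ranking:

    ```python
    # Two imaginary products with opposite strengths: X is strong on
    # "macro" samples and weak on "packed" ones; Y is the reverse.
    detection = {
        "X": {"macro": 0.95, "packed": 0.40},
        "Y": {"macro": 0.60, "packed": 0.90},
    }

    def score(product, testbed):
        """Expected fraction of the testbed that the product detects."""
        hits = sum(detection[product][family] * count
                   for family, count in testbed.items())
        return hits / sum(testbed.values())

    testbed_1 = {"macro": 900, "packed": 100}  # skewed toward macro samples
    testbed_2 = {"macro": 100, "packed": 900}  # skewed toward packed samples

    for name, bed in [("testbed 1", testbed_1), ("testbed 2", testbed_2)]:
        ranking = sorted(detection, key=lambda p: score(p, bed), reverse=True)
        scores = {p: round(score(p, bed), 3) for p in detection}
        print(name, scores, "-> winner:", ranking[0])
    ```

    Same two products, same per-family detection rates - only the composition of the testbed changed, and the 'winner' changed with it.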

    What I am NOT allowed to play with is the pocket of inexperienced users.
    That is what I respect most, whereas others don't.

    <No Bitterness>
     
  17. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
    Malware writers are getting really advanced these days, bit of a concern.

    "For real protection, however, in view of the flood of new malware, the way these programs cope with new and completely unfamiliar attacks is more important. And that's where almost all of the products performed significantly worse than just a year ago. The typical recognition rates of their heuristics fell from approximately 40-50 per cent in the last test - at the beginning of 2007 - to a pitiful 20-30 per cent. Only NOD32, with 68 per cent, still delivered a good result, while BitDefender, with 41%, could be called satisfactory."
     
  18. Miyagi

    Miyagi Registered Member

    Joined:
    Mar 12, 2005
    Posts:
    426
    Location:
    None
    I am finding this thread too funny. To my one and only: learn the most basic ethics first, and then you will understand and accept facts that are already established. My response ends here. Thank you.
     
  19. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    212eta, on the serious side: based on your views, what should the average user run to guard against all that you discuss? I am curious as to what security you prefer. I will end that with a thanks.
     
  20. Kosak

    Kosak Registered Member

    Joined:
    Jul 25, 2007
    Posts:
    711
    Location:
    Slovakia
    No antivirus can substitute for a healthy human mind.

    :thumb:
     
  21. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    After the removal of a number of off-topic and/or inflammatory posts, the thread has been placed back online.

    Here's the deal folks, if the thread returns to that previous mode of exchange, it will be closed and not reopened. If you can't participate in a civil discussion on the actual topic of the thread, then simply don't contribute.

    The thread has a nominal topic - stay with it. Civil discourse has certain conventions - adhere to them. If someone disagrees with a point you've made or an opinion you've voiced, it's often simply that - a disagreement in perspective or opinion, not an indication that the other party is mentally challenged, technically inept, or trying to defraud the public.

    Finally, the discussions here should always reflect the technical topic at hand, not the members posting about that technical topic. If you're making comments regarding person X - comments of any type - it's a likely sign you're starting to head in the wrong direction.

    Regards,

    Blue
     
  22. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    May I ask why?
    How does AV-Test.org test proactive detection?
     
  23. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Actually, I was distinguishing between an analysis based on a population and an experimental sample set (which is a subset of a population) used, ideally, to provide a statistically unbiased model of a population. In this case, the AVs are a defined population.
    I use a level of precision in my language that seems to escape you.
    I don't believe any serious tester out there would make claims to the contrary.

    But let's delineate what that really means. At any point in time there is an actual active population of malware. If you were to collect every PC on the planet and have an expert examine each one, you could split the collected PCs into actively infected and operationally clean machines. Collect all the unique malware files on those actively infected machines (and, if possible, any now-discarded downloaders) and that is the current active malware population (call this A). If you had done that over time throughout the history of PCs and merged all those results to remove duplicate entries, that would be the integrated active malware population (this is A + B). Now, there's a large body of samples floating around out there - zoo samples, if you will, even if available in the open - that exist and are available, but have never infected any machine in the sense of a victim user's malware infection (deliberate inoculation by a tester does not count; this is C). There's also a body of material out there, we'll call it D, that is flagged by one or more AVs because either part of the package (say, a specific routine) or the packer has been identified as associated with an active malware sample, or because it is a nonfunctional variant of what was once an active piece of malware - these are actually false-positive samples.

    Now, tests will examine anything ranging from a subset of A only, a subset of A+B, a subset of A+B+C, to a subset of A+B+C+D. Sometimes the subset chosen is clear from a detailed report; sometimes it is unclear. In any event, there is virtually no pragmatic way to readily determine whether any of the subsets employed provide an unbiased representation of even the parent populations (A, A+B, or A+B+C). Note that D is not part of this, since its members are not functional malware samples, yet they can infiltrate a sample testbed. Unfortunately, screening by assuming that a sample is positive if X AVs flag it (say X > 4, for example) does not guarantee that one will have a subset uncontaminated by members of D.

    Overall, the size of the subset is irrelevant unless it approaches the entire population. What matters is whether or not there is an intrinsic bias in the membership of the test subset. Further, despite the extensive interconnectivity of the Internet, geographic and language-based regionalization of malware populations remains. Even ignoring differences in usage patterns, the subset most appropriate for me may not be the same subset that is most applicable to you. That's an additional biasing aspect from both a test and an infection standpoint.
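
    As a toy illustration of that regionalization point (the detection probabilities below are invented, not measured), a quick simulation shows how much the measured rate can shift with the harvesting pattern alone:

    ```python
    # Hypothetical parent population: 10,000 samples tagged by region,
    # and a product that detects European samples better than Asian ones.
    import random

    random.seed(42)
    population = ["asia"] * 6000 + ["europe"] * 4000
    detect_prob = {"asia": 0.70, "europe": 0.95}  # invented probabilities

    def measured_rate(subset):
        """Simulated detection rate of the product over one testbed."""
        hits = sum(random.random() < detect_prob[region] for region in subset)
        return hits / len(subset)

    uniform_subset = random.sample(population, 1000)                  # unbiased draw
    biased_subset = [s for s in population if s == "europe"][:1000]   # regional harvest

    print("uniform subset          :", round(measured_rate(uniform_subset), 3))
    print("regionally biased subset:", round(measured_rate(biased_subset), 3))
    ```

    Roughly 0.80 versus 0.95 for the very same product - the testbed's provenance, not the product, moved the number.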

    Large scale tests try to minimize both bias and error. To varying degrees they succeed, but as far as I know, nobody has really produced a rigorous analysis of this. Disagreement between the results observed for large scale tests should not be viewed as problematic, but more as an indication of how finely differences in results should be viewed.

    Finally, there is a time fluidity to what constitutes A/B/C/D, which tends to move quickly relative to sample testbed harvesting and testing. This introduces an additional operational bias in the results - a bias that is not wholly quantified by fixed-time-window retrospective testing.
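
    A sketch of that time effect, again with invented numbers: freeze the signatures at day 50, let detection of post-freeze samples fall back to heuristics, and watch the measured rate track the harvesting window rather than the product:

    ```python
    # Toy model: samples appear on days 0-100, signatures frozen at day 50.
    import random

    random.seed(1)
    FREEZE_DAY = 50
    SIG_RATE, HEUR_RATE = 0.98, 0.30  # hypothetical detection probabilities

    def detected(sample_day):
        p = SIG_RATE if sample_day <= FREEZE_DAY else HEUR_RATE
        return random.random() < p

    def measure(first_day, last_day, n=5000):
        days = [random.randint(first_day, last_day) for _ in range(n)]
        return sum(detected(d) for d in days) / n

    print("testbed harvested before the freeze:", round(measure(0, 50), 3))
    print("testbed harvested after the freeze :", round(measure(51, 100), 3))
    print("mixed harvesting window            :", round(measure(0, 100), 3))
    ```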

    I don't believe I've ever stated anything to the contrary. In fact, this is painfully obvious, particularly if the testbed is relatively small (and that's a rather loaded term in and of itself).
    To tell you the truth, I have no idea what you're saying here. One interpretation is that you feel my advice to inexperienced users is inappropriate. If so, then I'd suggest you provide some specific evidence of that. If that's not it, then perhaps you should be clearer and more specific in your statements.

    AVs are one approach to dealing with malware. They are a powerful approach with significant faults. Most people understand this. That's also true of all the alternatives, from account/policy management, to light virtualization, to roll-back/imaging, to behavioral control, to system lockdown, and so on. If the current implementations of any of these technologies were a magic bullet, we wouldn't be having this discussion, but they're not. Each approach has clear downsides as well as clear upsides. The balance of these characteristics tends to be situational, usage-specific, and user-based, as is the most appropriate choice of solution.

    Blue
     
  24. saffron

    saffron Registered Member

    Joined:
    Nov 4, 2007
    Posts:
    82
    "KAV won Best AV of 2007 in AVC! Let's beatify Clementi!"

    "NOD32 won Best AV of 2007 in AVC! Let's crucify Clementi!"

    :) :) :)

    He sure does!
     
  25. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Thanks, Blue. You never fail to be a breath of fresh air in a stuffed cell.
     