Unless I'm misinterpreting the findings, NOD32 kicked butt in the latest Virus Bulletin

Discussion in 'NOD32 version 2 Forum' started by ratchet, Oct 17, 2007.

Thread Status:
Not open for further replies.
  1. ratchet

    ratchet Registered Member

    Joined:
    Feb 20, 2006
    Posts:
    1,906
    comparatives: 46 successes, 3 failures, 5 "no entry" (whatever that means), and on every operating system.
    Kaspersky was the only one close, with 40 successes and 14 failures.

    I don't understand how NOD is supposed to be losing its edge.
     
  2. WSFuser

    WSFuser Registered Member

    Joined:
    Oct 7, 2004
    Posts:
    10,632
    Symantec is also close: 40 Success / 6 Failure / 8 No Entry

    Besides there are other tests such as AV-Comparatives and people can have varying opinions on NOD32. What matters is your opinion.
     
  3. webyourbusiness

    webyourbusiness Registered Member

    Joined:
    Nov 16, 2004
    Posts:
    2,640
    Location:
    Throughout the USA and Canada

    "No entry" is when they are testing a platform combination that Eset NOD32 doesn't have a protection product for.

    The current stats are better than any other AV company's - as mentioned, the closest is Symantec with six fewer VB100 awards.
     
  4. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Because VB100% certifications are a dime a dozen nowadays. Take a closer look at the test methodology and the sample set used, and you'll see why.
     
  5. webyourbusiness

    webyourbusiness Registered Member

    Joined:
    Nov 16, 2004
    Posts:
    2,640
    Location:
    Throughout the USA and Canada

    You think so? Interesting... I have a similar opinion of many other testing setups by OTHER test organizations - but the VB100 awards actually have a GOOD testing strategy compared to the others, in my opinion. Everyone's entitled to their own opinion, I guess...
     
  6. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Sure, if your definition of a "good testing strategy" is one using a sample set of only ~700 viruses/worms out of the ~100K pieces of malware available today (and even that's an optimistic estimate), while not even including trojans, the most prolific threat today, in the test set, because by definition trojans do not replicate and technically aren't "viruses".

    You're right, everyone's entitled to their own opinion.
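    For scale, the sample-set arithmetic in the post above is easy to check (an editorial aside; the ~700 and ~100K figures are the thread's own ballpark estimates, not measured values):

    ```python
    # Rough coverage of a VB-style sample set against the total malware
    # population, using the ballpark figures cited in the post above.
    sample_set = 700
    known_malware = 100_000
    coverage = sample_set / known_malware
    print(f"{coverage:.1%}")  # prints 0.7%
    ```

    In other words, the objection is that the certification exercises well under one percent of the threats the poster claims are circulating.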
     
  7. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,056
    Location:
    Las Vegas
    The "edge" has been maintained and is not threatened, except by a few shills in this forum who are attempting to talk up/promote their AV software. Kaspersky likes to alter your file system, so they are not contenders, unless you like software companies changing your file structure without your sanction.

    And yes, there are many ways to "test" AV software. The problem comes with simulating real-world conditions, which many tests do not mirror as well as they might. My view is to stick with AV software that has consistently ranked at the top of AV tests (e.g. NOD32).
     
    Last edited: Oct 18, 2007
  8. BlueZannetti

    BlueZannetti Administrator

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    That's a rather broadly drawn statement. Not everyone who holds an opinion or has drawn a conclusion different from your own is a shill. Security strategy is a multidimensional problem, and not everyone weights all the dimensions the same way.
    That is a good strategy; the devil is in the detail of what constitutes "top". I have yet to see a reasonably rigorous statistical analysis of the intrinsic noise in these types of evaluations, but casual inspection of time-series trending behavior would appear to place it in the range of a few percentage points, which is a lot larger than most folks appear to believe, based on some of the heated discussions at this site.
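    The few-percentage-points noise estimate above can be sanity-checked with elementary binomial statistics. A minimal sketch (editorial; the 665-of-700 sample counts below are hypothetical, not taken from any actual VB test):

    ```python
    import math

    def detection_ci(detected, total, z=1.96):
        """Normal-approximation 95% confidence interval for a detection rate."""
        p = detected / total
        se = math.sqrt(p * (1 - p) / total)  # binomial standard error
        return p - z * se, p + z * se

    # Hypothetical: a product detects 665 of a 700-sample test set (95.0%).
    lo, hi = detection_ci(665, 700)
    print(f"95% CI: {lo:.1%} to {hi:.1%}")  # prints 95% CI: 93.4% to 96.6%
    ```

    At this sample size, two products whose measured detection rates differ by a point or two fall inside each other's confidence intervals, which is consistent with the noise level described above.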

    Blue
     
  9. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,056
    Location:
    Las Vegas
    Yes, it is a broadly drawn statement (with true premises, in my opinion), which has no logical bearing on the truth value of its conclusion. Is your premise that NOD32 is not a highly ranked (by the rankings provided by the testers) AV program, based on the various tests that are in the public domain?
     
    Last edited: Oct 18, 2007
  10. webyourbusiness

    webyourbusiness Registered Member

    Joined:
    Nov 16, 2004
    Posts:
    2,640
    Location:
    Throughout the USA and Canada

    as I said - it's my opinion.


    The VB100 award tests "real world" situations I am interested in much better than, say, av-comparatives.org.

    The "in-the-wild" test of an AV against known threats currently in circulation, with up-to-date signatures and heuristics, is FAR more relevant to my real-world situation than running my AV solution with updates frozen for 3 months against a massive zoo of 100k+ "threats" - a great number of which are no longer relevant to my real-world situation.

    The other major points where I consider VB100 a large improvement over the other tests I have seen are that it requires 100% IN-THE-WILD DETECTION and ZERO FALSE POSITIVES. Now, I understand that may not be as important to you, but I run a couple of businesses, and these factors are important to ME - and MY BUSINESS CUSTOMERS.

    If your top criterion is picking the very most out of a (largely) obsolete pile of junk - i.e., the 100k+ test samples scanned while the AV is relying on 3-month-old updates - then you will undoubtedly have a different "top AV product". My problem with that is that I don't care about threat vectors that can't attack my "modern" network. The 100k+ "zoo" used in those tests includes many "threats" which have ALMOST ZERO chance of occurring in the network infrastructures of today. As a consequence, I place little or no value on their detection. Another portion of this "zoo" is massive numbers of droppers - I don't care about them - I care about their PAYLOAD.

    Do I find the odd business to whom this would be important? OF COURSE. Are they my customers? SOMETIMES. Would NOD32 be the TOP CHOICE for all situations? Not necessarily - although even a huge DOS-only installation (say, an old manufacturing plant) would not have to worry about vast numbers of this 100k+ zoo.

    On the 100k+ zoo testing: sure, I want to know that heuristics are working, and working very well (as other tests show they are for NOD32 - almost, if not entirely, unsurpassed) - but I am PRIMARILY concerned with threats that are likely to end up on my machine via current threat vectors. I care not whether a 10-year-old DOS threat is detected on my XP machine - do you?

    On trojans - you alluded to the fact that NOD32 doesn't have the very best trojan dropper detection. This is true - and Eset reps have stated here why... and I 100% concur with their strategy.

    The reasoning for not being 100% on trojan droppers is as follows:

    1. there are ever-evolving threats
    2. there are ever-evolving threat delivery vectors
    3. the delivery packages jump from vector to vector and morph EXTREMELY rapidly


    If you focus on the last item - the packages (trojan droppers) themselves - then you will be constantly chasing your tail - you will never stop until you are dizzy and worn out... an intelligent approach? I think NOT!

    However, if you concentrate on the threats themselves - i.e., the malware program: the spyware, registry editor, SMTP server, evolving keylogger, dialer, etc. - then yes, it's possible that a new variety of dropper will get its payload onto your machine - but once the program is unencrypted and attempting to do its dirty work, the threat itself will be detected.

    Malware authors are getting much cleverer. They are no longer students working in their off-hours; they are becoming dedicated teams of programmers working 24x7 for very large profits. These for-profit, criminally driven threats - such as the pump-and-dumpers and spammers paying for the use of the Storm Worm botnet - are happy to see the bot-masters release new variants as quickly as every 30 minutes, using ever-evolving encryption and encoding methods. In this instance, concentrating on the dropper itself is a completely FUTILE method. Getting to the heart of the problem - the SMTP engines and the rootkit used to hide the bot - is the ONLY SENSIBLE way to defeat the problem.

    Sure, I have other products I sell - and I weigh the pros and cons of each product when assessing the requirements of each customer. These vary by type of customer, network installation, OS mix installed and typical network activities - sometimes even budget overrides the theoretical best solution when it comes to the final decision. Does this make NOD32 top of the list in EVERY instance? No. Does it make it top, or near the top, of the list for MOST? Yes.

    Seriously - I know that some people will never purchase NOD32 - does that matter? NOT A BIT! Each to their own - but I am a firm believer that this small company has a great product and an even better roadmap to ensure that this product remains as good as it can and better still, continually improves.

    You pays your money (or not, if using a free solution) and you makes your choice(s). If your choice is different, fine - go and preach your methodology somewhere neutral, or in the forums of your chosen threat protection system; I'm sure you'll get a great reception. But coming into the support forum of a product you don't believe in, and putting that product down, is either:

    a. spiteful
    b. trollish
    c. stupid

    or all of the above. Go enjoy your own "protection" - and get yourself a hobby that isn't as foolish or spiteful as posting "damning" threads in the NOD32 support forum. People in this forum have made their choice - you have yours - so live and let live!
     
  11. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    In other words, YOUR real-world situation is such that only the ~700 pieces of malware tested by VB matter to you.

    Do you honestly believe this is the scenario your customers face? If yes, then I can safely tell you that you are burying your head in the sand. I suggest you take a careful look at the Wildlist; maybe then you'll discover it's an obsolete list defined by principles more than ten years old. For instance, the viruses W95/Dupator and W95/Spaces are present in the August 2007 list, the latest available on the Wildlist website. For your information, they are macro viruses for Microsoft Word 95.

    The Wildlist is a completely obsolete set, based on equally obsolete principles, that serves no purpose other than to inflate the pomp and ego of the vendors who pass the tests and of those vendors' customers who believe the tests are actually worth anything. It has failed to keep up with the evolution of the malware industry: while spyware, adware and trojans have become the main threat to users, the Wildlist continues to list Word 95 macro viruses as "actively propagating threats". :rolleyes: Unfortunately, you are so obsessed with the Wildlist that anything it doesn't contain is by default, in your words, "an obsolete piece of junk". I can assure you that NOD32 is capable of detecting more than the ~700 viruses/worms that VB throws at it, so perhaps it's time you suggested to Eset that they need not bother with any "pieces of junk" beyond what's in the Wildlist. Eset should then be able to trim their detection down nicely to only those ~700 viruses/worms.

    If you're honestly serious about educating yourself about real-world facts as opposed to just blindly defending your AV and the horribly outdated testing organization that sings its praises, here's a very good paper written by Vesselin Bontchev on why the Wildlist was obsolete since ages ago.

    That's all fine and dandy, but unfortunately the test sets, and "in the wild" as defined by VB, have been obsolete ever since I graduated from grade school.

    Wrong again. Droppers are trojans, but trojans are not necessarily droppers. Again, perhaps it's time you looked beyond your antiquated Wildlist and saw the malware scene for what it is today.

    And the VB100% utterly fails to provide any evidence that NOD32 excels at this. Actually, passing the VB100% proves nothing, for reasons already mentioned earlier, with the possible exception that Eset has developed some very clever strategies for catching antiquated, ten-year-old threats, and then marketed that to their customers as though it were some sort of achievement that actually had any significance when it comes to defending said customers against real-world threats.

    Apparently then, as far as you're concerned, the support forum is a place where dedicated fans sing their praises to the company without having to pay much regard as to whether they're factually accurate.
     
  12. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    To add to my previous post, here's what Randy Abrams, Eset's own Director of Technical Education and one of the frequent posters on Eset's Threatblog, had to say to Microsoft when the latter announced they'd earned a VB100% award:

    http://windowsonecare.spaces.live.com/blog/cns!C29701F38A601141!4299.entry#comment

    Even Eset itself already knows what winning a VB100% award actually means. Perhaps it's time for some of its more "dedicated" customers to learn this wisdom too.
     
  13. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    One last point.

    For anyone who honestly believes that VB100% tests "real-world" situations and that earning an award means "100%" in-the-wild detection, all I'd like to do is point out that there are four posts on the first page of this forum alone - the support forum of a product that has earned a WHOPPING 46 VB100% awards - seeking advice about missed malware that said 46-time-VB100%-winning product didn't detect soon enough.
     
  14. anotherjack

    anotherjack Registered Member

    Joined:
    Jun 13, 2003
    Posts:
    224
    Location:
    Louisiana
    @webyourbusiness:

    Nicely put. It constantly eludes me why people who have made the decision (for their own reasons) to use another AV solution take great pains to bash a product that someone else has chosen (for THEIR own reasons). Everyone looking for an AV solution has their own criteria for what constitutes adequate or superior protection and their own idea of what constitutes an "ideal solution" for their uses. Everyone's data situation is different, so the required solution will usually be unique to that user or dataset.
     
  15. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    May I suggest that your time would be better spent wondering why some people become so defensive when the high praise for their favorite product turns out not to be what it appears, rather than on presumed "bashing" that exists only in the minds of some people, thanks to the aforementioned defensiveness.
     
  16. anotherjack

    anotherjack Registered Member

    Joined:
    Jun 13, 2003
    Posts:
    224
    Location:
    Louisiana
    I stand by my statement that differing protection requirements are defined by the users or administrators of any given system or network. Someone whose user base is constantly browsing the 'net will have a different set of requirements for protection (mainly up-to-date heuristics and a weighting toward in-the-wild threats) than someone who manages a more closed set of systems, who may be more concerned with threats to their in-house database.

    Your main concern appears to be that the Wildlist is used for testing purposes by testing organizations. What would your suggestion be for generating a standardized dataset for testing - one that would be universally acceptable and repeatable across multiple testing facilities? Whatever you come up with, I'm fairly sure that someone will find something wrong with it, as you have done with the Wildlist, because they see something wrong with how you came up with it. :D

    You can't please everyone (as is often demonstrated here), but the testing methodologies used by VB and other trusted organizations have the advantage that they've been around for a good while and have an established "track record" of impartiality in their testing methods. That tends to level the playing field a bit and allow for better evaluation of competing products. They're all measured by the same yardstick, as it were.
     
  17. DennisTh

    DennisTh Registered Member

    Joined:
    Oct 18, 2007
    Posts:
    9
    There are other reasons to select an AV solution than just the results they may get on the VB100.

    We are in the process of moving away from Trend. Why? I'm a firm believer in regular full scans, and I cannot rely on users to run them. With Trend you can schedule them - but did they run? The only way to know is to sit at the PC's console and watch the blue ball in the task bar; nothing is recorded in the event log or on the server. Since I really don't have the time to manually watch an entire organization's PCs and servers every night, Trend is out.

    We use Backup Exec. Support was good under Veritas; since Symantec bought it, the support has degraded to the point that it's a total waste of money and time. Is Symantec's AV support immune to this? I don't know, and will never find out first hand. Once BE support tanked, Symantec moved to the do-not-buy list.
     
  18. webyourbusiness

    webyourbusiness Registered Member

    Joined:
    Nov 16, 2004
    Posts:
    2,640
    Location:
    Throughout the USA and Canada

    That's my entire point. I hate to bait trolls, and I have to conclude that I've fallen down the hole again and am arguing with YET ANOTHER TROLL in sol-basher01, or whatever he's calling himself.

    I fail to understand why someone would even waste their time in a support forum bashing the very product the forum exists to support - their motive can only be malicious. And as I've stated before, and will continue to state: we are resellers for MULTIPLE solutions - we do NOT sell ONLY NOD32. It is my contention that this product fits a LARGE NUMBER of my clients - that's all.

    Everyone needs to make up their own mind - but if ANYONE wants to bash a particular product, point out inadequacies, or even just COMPARE two or more products side by side, the official support forum is NOT the place for that - there are GENERAL AV/THREAT forums better suited to such posts. As such, I'm done responding to the troll.

    One last point - if VB100 awards were so worthless, I wonder why so many AV solution providers proudly announce winning them? How many decry their validity when they FAIL to be awarded one?

    The VirusBulletin.org testing methodologies, and the validity thereof, are NOT up for debate here - that needs to be taken to a different forum. If you don't like them, oh well - take it up with them, not here in the support forum for NOD32 - this is NOT an appropriate place to debate the testing methodologies used by ANY INDEPENDENT TEST LAB.

    I would ask any mod to shut down this thread - it's becoming a mud-slinging contest on the part of the troll, and that's NOT the purpose of this forum at all.
     
  19. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    I see you are more interested in accusations of trolling and mudslinging rather than discussion of the facts and issues I have raised. Oh well.

    It seems to me that the mere pointing out of inaccurate and misleading praise of Eset is a bitter pill that is difficult for some here to swallow. There are many other tests and reviews where NOD32 ranks favorably - personally, I cannot understand the desire to defend to the death this particular hopelessly outdated testing organization, and to take it as a personal insult when the simple fact that its reviews don't actually mean much is pointed out. The simplest explanation would be plain, commonplace fanboyism that strives to stifle any and all unfavorable facts from ever coming to light, though I'd rather believe that's not what's happening here.
     
    Last edited: Oct 18, 2007
  20. anotherjack

    anotherjack Registered Member

    Joined:
    Jun 13, 2003
    Posts:
    224
    Location:
    Louisiana
    Well, I recall asking, "What would your suggestion be for generating a standardized dataset for testing? One that would be universally acceptable and repeatable across multiple testing facilities?" A valid question, I think, that's still waiting for an answer.

    The issue is that you logged into, as webyourbusiness pointed out, the NOD32 Support Forum. When virtually the first statements you put forth are effectively "VB uses the Wildlist, and it sucks!" without offering any reasonable suggestion for improvement, that would normally be seen as trolling. As WYB pointed out, many, MANY AV vendors tout the VB100 list in their advertising when it's favorable. Why? Because, as I pointed out in my previous post, it's a trusted source in the AV community, for whatever reason one cares to claim, and it has a track record of impartiality and thoroughness in its methodology. Until something better comes along and proves its mettle, I don't see why there's a problem continuing to use the VB list as a resource - one more reference in the decision-making process. No single source should be the only basis for a purchasing decision. Pros and cons need to be weighed, and your protection and management parameters given proper due, before a reasonable decision can be made and supported.
     
  21. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Unfortunately, I disagree that this is a valid question. My points have been raised; your question is something else entirely, and does little to argue that VB100% test results are a reliable indicator of protection level (I'll assume that's your position).

    It's impossible to create a perfect, flawless methodology, but VB100% is so far off the mark it isn't even funny (ten-year-old Word 95 macro viruses are "actively propagating" threats?).

    I invited him to check out the testing methodology and sample set for himself; unfortunately, it looks like he didn't take up my offer. Insisting that I have to step in and fix an obsolete testing organization is a cop-out on your part. If you have any arguments for why VB100% awards are relevant in terms of protection level, I'd be happy to hear them.

    Something already has. Two somethings, actually. AV-Test and AV-Comparatives are two of the most reputable testing organizations right now. Both use methods just as impartial and thorough as VB's, or more so; take the effort to update and maintain large malware testbeds so that they contain only valid malware; do not disqualify products for stupid technical reasons; offer their samples to vendors for verification of test results; and, most importantly, test with sufficient numbers, variety and currency of malware that the results actually mean something.

    The only advantage VB has is its history. More people know about it and thus conclude that it must be the authority in virus testing, even if they know little to nothing about how it conducts its tests. Vendors are also more inclined to promote their VB100% certifications because, in addition to VB's popularity, the tests are easy to pass. Just look at how Microsoft trumpets its VB100% for OneCare. But awareness of what VB100% really is is growing, and even Eset itself has been inclined to reveal the true nature of VB100% awards, as I mentioned in an earlier post.

    So I guess you'll agree that Eset's VB100% awards don't really mean much in the present age?
     
  22. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,056
    Location:
    Las Vegas
    Perhaps Solcroft's premise is not to enlighten us with his computer acumen, but rather to attack, obfuscate, and offer his brand of fallacious claptrap.
     
  23. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    What a coincidence. One could say the very same about you.
     
  24. Bunkhouse Buck

    Bunkhouse Buck Registered Member

    Joined:
    May 29, 2007
    Posts:
    1,056
    Location:
    Las Vegas
    Yes, one could say a lot, but some of us are still waiting for your expertise as to how you would construct a valid and robust AV testing methodology. And what AV do you use?

    Btw, I agree with your contention about AV-Test and AV-Comparatives - so we don't disagree on everything.
     
  25. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Another cop-out. If you're unable to provide any arguments for why VB100% awards are relevant to protection level, you're always welcome to admit I was right.

    No offence, but what makes you think you'd be privy to that information?
     