Discussion in 'ESET NOD32 Antivirus' started by JAB, Apr 15, 2008.
Interesting? No, it's not even that. Behind Clam, eTrust and Quickheal? It would have made a good April Fools' Day post.
That's part of what makes it interesting. It makes me wonder why ESET would perform so poorly in those tests, when my general opinion of ESET is quite high.
That's done it for me! Two years ago I went with NOD because of its lightness and reputation. I hate v3, and v2.7 has started throwing up Vista compatibility warnings since SP1. These forums were much more positive even a year ago. I have cut my losses and gone to Antivir for a while.
I don't know how reputable those tests are, but the results are pretty shocking and don't fill me with confidence at all.
1. Honeypots are prone to catching corrupted files, we've seen this many times in the past. ESET does not detect corrupted files. A serious tester provides the samples used in his test to AV vendors for verification. I'm not aware of ESET being contacted by that tester.
2. A test set comprising 1,030 files is too small IMHO, compared to the 800,000–900,000 samples used by other testers, and given the fact that tens of thousands of new threats emerge on a daily basis.
1. That sounds like a reasonable hypothesis.
2. If one is measuring detection against emerging threats, I see no reason why a sample of about 1,000 would not be statistically meaningful. The key is whether the sample is approximately random. If SRI's sample is random, basic statistics suggests that the uncertainty in their measurement is only about 1.4 percentage points. If SRI's sample is reasonably random, one can be reasonably sure that if ESET were run on tens of thousands of emerging threats, its detection rate would fall between 69% and 75%, which is not enough of a range to change its relative ranking by more than about two positions, to 24th.
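The margin-of-error claim above is easy to check. Here is a minimal Python sketch using the normal approximation to a binomial proportion, taking the 72% detection rate and the 1,030-sample set from the test under discussion (the 1.96 multiplier is the standard two-sided 95% z-value):

```python
import math

n = 1030   # size of the SRI honeypot test set
p = 0.72   # ESET's observed detection rate

# Standard error of a binomial proportion (normal approximation)
se = math.sqrt(p * (1 - p) / n)

# 95% confidence interval: observed rate +/- 1.96 standard errors
low, high = p - 1.96 * se, p + 1.96 * se

print(f"standard error: {se * 100:.1f} points")        # ~1.4 points
print(f"95% CI: {low * 100:.0f}%-{high * 100:.0f}%")   # ~69%-75%
```

This reproduces both numbers in the post: roughly 1.4 points of uncertainty per standard error, and a 69%–75% range at 95% confidence. The caveat in the post still applies: the approximation only holds if the sample is close to random.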
It is important to note that SRI is not a small, fly-by-night organization or a single individual with a hobby, nor do they purport to be performing a comprehensive test of anti-virus products. They submit what their honeypot catches to VirusTotal and tabulate the results. This means, I think, that ESET receives a copy of any missed files, although ESET wouldn't know they had originated with SRI.
One of the reasons I posted the link to the article was in the hope that ESET would reach out to SRI to discuss methodology and obtain samples for analysis.
Why the heck isn't v3 included in those tests?
Presumably because VirusTotal isn't using v3. That doesn't diminish the importance of your observation: why doesn't VirusTotal use v3?
It doesn't matter whether v2 or v3 is used for testing. The question is how many of those files were actually functional, as files collected by honeypots are prone to corruption. Also, NOD32 users are protected against threats not otherwise detected at VirusTotal or by other on-demand scanners, as the web/email protection modules utilize much more sensitive heuristics as well as blocking of suspicious websites.
I see Avira, BitDefender, and AVG at the top again...
Yeah, I really like Avira, but their central management tool is unusable. And their false-positive rate is a bit high.
Is this just a result of 'faulty', or disagreeable, testing practices? I know that some AVs, such as Dr Web, dislike the way tests are done. DRW gets low results because of this. Is this the case with NOD?
- The tester uses Virustotal. He has no control over the settings of the AVs/AMs.
- AVG better than Webwasher and Antivir, LOL
Exactly! Like I always say, if it goes against preconceived, unsubstantiated notions, it's nonsense!
You're not saying the only test results that are valid are the ones the guys complaining about this one agree with?!?
The question is, who gets to decide which are preconceived, unsubstantiated notions, and which are facts that are common knowledge?
That task should not be left for the ignorant, I say...
Interesting, I wonder why they use v2... I would say fair game anyway, since the detection would be the same as v3, am I correct? Congrats to BitDefender, no hard feelings; every vendor has its process of doing things, and no vendor is perfect. NOD32 still rocks!
You must be correct as far as detection goes, as Marcos says it doesn't matter whether they used v2 or v3.
I think he was saying it doesn't matter what version they tested with because the methodology was flawed in the first place.
How can the last product have missed 1031 binaries, if there were only 1030?
If you read between the lines: "The results do not take into consideration the false positive rate of a given tool, and thus a tool that declares everything to be infected would appear to have the highest true positive percentage rate."
On the other hand, NOD32 detected 72% of samples, which is very close to the detection levels usually achieved in proactive/retrospective tests.
IMHO, nothing to worry about.
That's an interesting point. But, I find it difficult to reconcile with the AV Comparatives reports. Looking at the SRI numbers vs AV Comparatives proactive/retrospective report:
Low false positives:
* F-Secure: 87% vs. 14%
* Symantec: 78% vs. 35%
* ESET: 72% vs. 71%
Medium false positives:
* AVG: 95% vs. 25%
* Kaspersky: 93% vs. 40%
* Avast: 88% vs. 37%
* Microsoft: 88% vs. 35%
* Norman: 88% vs. 33%
* McAfee: 74% vs. 34%
* Fortinet: 74% vs. 3%
High false positives:
* Avira: 95% vs. 81%
* Bitdefender: 95% vs. 44%
* Dr. Web: 83% vs. 39%
* F-Prot: 83% vs. 33%
Why is it that everyone but ESET has a significantly easier time detecting the SRI samples than the AV Comparatives proactive/retrospective samples? It looks like the similarity in detection rates for ESET is a coincidence.
I'm not hammering on ESET here. I'd just like to better understand this difference. I think the bulk of the evidence suggests that ESET is the best product on the market.
Well, I can't speak on behalf of others, but it seems that something in our software causes rock-solid detection rates.
From the statistical point of view, the test set was limited, and there are no data known about its composition, etc.
I think the concern is rock-solid at 72%.
I've already dealt with the statistical issue. The small test set is irrelevant to the conclusions, provided it is a random sample. The power of statistics is the ability to draw high-confidence conclusions from limited data.
Nonetheless, I agree that knowing much more about the composition of the test set would be very useful. I'm hoping ESET will reach out to SRI, whether the results get shared with anyone else or not.