Discussion in 'other anti-virus software' started by InfinityAz, May 23, 2007.
The fact that the samples were available to the vendors before the test was conducted in principle undermines the validity of the test. Whether this actually skewed the results, we can't be sure. As was mentioned, it probably wasn't enough time to add all the samples, but it only takes some of the vendors adding some of the samples to their signatures to skew the results. Granted, all of them supposedly had equal access to the samples, but unless all the vendors added relatively the same amount of the sample malware to their databases, the results would be skewed. Am I missing something here?
Regarding discussions on whether any AV company can add 5,000+ malware samples within a month, Mr. Marx has once again very kindly provided some comments:
Lots of interesting comments from Mr. Marx above. I must say he has been very informative throughout all this.
It will be interesting to see the reliability tests, which should be released in the near future. Also, from these statements it looks like Mr. Clementi is correct: scanners with good generics are probably detecting lots of malware.
Of course it's correct, but it is not exactly what I said/meant. But I will not go deeper into this.
I've done the testing and stand by my results.
I don't know IBK, you have been quite vague throughout this thread. Every time I see your post I feel like you are thinking something (i.e. you have your reservations/doubts).
Either way, if you're thinking what I think you're thinking, then I'll have to refer you to your own post earlier:
No offense intended towards you, so please don't take it offensively. And with that, it's best to drop the issue.
To me, AV-test remains one of the most reliable testing organizations out there (if not THE most reliable), and Andreas Marx's continued clarification of various doubts relating to the test only makes me trust AV-test all the more.
I do not criticize the test of Andreas Marx. After all, the results are quite similar to those of another tester, even if I would have expected more discrepancies based on the test set used.
I do not know what you think I am thinking, but as what I think has not even been mentioned in this thread, I am quite sure you do not know what I think.
Like I have always stated, AV-Test is trustworthy and I am not doubting the results at all.
I appreciate the time you have spent on this thread.
I am also looking at the results from this website (May 10) which are more in line with those at AV-Comparatives (except for Symantec). http://www.virus.gr/fullxml/default.asp?id=110&mnu=110
OK then. Considering the way you mentioned generics in one of your previous posts, something occurred to me about what you might possibly be thinking. Most likely I was wrong in what I was thinking. Again, sorry if I directly attacked you.
But do explain this to me:
Isn't the only other really trustworthy test out there AV-comparatives? So, wouldn't this "another tester" be you (i.e. Andreas Clementi), or is there a third one newly added into the mix?
Not at all. Those results are totally different.
Sorry to say, but virus.gr is a completely unreliable source.
Yes, I just did not want to promote myself.
No, virus.gr has its own set of discrepancies. VirusP has put in a respectable effort to clean up his sample set since the last virus.gr test, but I'm pretty sure there are still quite a few corrupted/harmless files in his test set.
In any case, the virus.gr test can be "interesting" for some users, but right now one cannot call it reliable, not just yet.
Actually, you are all talking about painkillers for a disease that need not exist at all, if you just pick the right way to go! Even without ClamAV you can live with it, if you just want to, as I do.
Actually I do believe it. I think it is you who can't fathom your precious pink diamond not being the rave of the AV world. Well, you both had better wake up or it will be the zirconium mine for you. Why is it that if your AV isn't one of the best, all test results are crap? Hmmm. I look at CSJ standing by his product, and reviews may be mixed, but you know, if it wasn't Avira, it would be Dr. Web. The time has come for Eset to come off their self-proclaimed mountain and deal directly with, and listen to, their customers in order to make a better product. They can do this, but it will take a radical change of perception. Bah, humbug.
I am at a loss as to why the point I and others mentioned isn't being addressed more fully. Unless I'm missing something, giving the vendors the test samples before the actual test takes place renders the overall test methodologically flawed by introducing the potential of skewed results. I can't help but wonder why he didn't give them the sample malware after the test had been completed. That way he can assure the accuracy and reliability of the test, at least in this respect, and still contribute to the efforts of AV vendors.
Giving vendors test samples is like giving a bank robber the keys to the bank. It is asinine. Show me someone who tests without giving the answers away and I'll agree you may have some accuracy.
That is an insult to IBK. He does not do that kind of thing.
I didn't say he did, but for anyone who does, the test is worthless to me. Wouldn't you rather see real-world testing?
Oh, okay. I misinterpreted your post.
I guess the only comment that I'd make is that the result is largely consistent with the other major test that is frequently run (www.av-comparatives.org). This suggests that most of the final results are not skewed. As I mentioned above, you can place some bounding limits on the expected results between the tests by looking at sequential on-demand results from www.av-comparatives.org. A complete calculation for all common products using the av-comparatives August 2006 and February 2007 results is given below. A caveat, however: don't over-interpret the results given below; they are crude estimates.
The bounding estimation is only of predictive use when it is "relatively" small. From a calculation perspective, a large range between the Estimated Min and Estimated Max values indicates instability in the detection rate over time. That could be due to a program undergoing either significant improvement or a major drop in performance, with all these calculations dominated by what has transpired for the Trojans category. A quick inspection of the raw results at www.av-comparatives.org suggests that, except for McAfee and Norman, the large range values noted for Avast!/AVG/BitDefender/Dr. Web/F-Prot are due to improvements in performance. At the end of the day it really doesn't matter to a customer how this improvement occurred, but the estimates shown below were obtained prior to and independent of the www.AV-Test.org evaluation.
Finally, let's keep a grip on perspective. Unless one is a risk prone user, all these products possess sufficient performance.
I'm sorry but I did not understand how you computed these estimates. I don't understand their meaning, either...
The calculations were described in an earlier post in this thread, here. It's an attempt to objectively answer the question of whether, aside from testbed sampling timeframe, the results of www.av-comparatives.org and the latest www.AV-Test.org AV comparison are the same.
Within reasonable and objective limits, I feel that the answer is yes. To obtain a more quantitative answer, a fairly extensive effort would be required, and it's not really worth it. I realize that lots of people read significance into razor-thin differences in detection rates, and the numbers I estimated show a much larger uncertainty, but that's what they are.
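[Editor's note] The posts above never spell out the exact formula behind the Estimated Min/Max values, so the following is only a hypothetical sketch of how such a crude bound could be built from two sequential on-demand detection rates. The function name and the "widen the observed range by the period-to-period drift" rule are assumptions for illustration, not the poster's actual method.

```python
def bounding_estimate(rate_prev, rate_curr):
    """Crude (est_min, est_max) band for a future detection rate, in percent.

    Assumed rule: take the range spanned by two sequential test results
    and widen it by the drift between them, clamped to [0, 100].
    """
    drift = abs(rate_curr - rate_prev)
    est_min = max(0.0, min(rate_prev, rate_curr) - drift)
    est_max = min(100.0, max(rate_prev, rate_curr) + drift)
    return est_min, est_max

# Example with made-up numbers (not actual av-comparatives figures):
lo, hi = bounding_estimate(92.4, 96.1)
print(lo, hi)
```

A product improving quickly between tests produces a wide band under this rule, which matches the post's point: a large Min/Max range signals instability in the detection rate over time rather than a precise prediction.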
How do Avira, Norman, AVK, Kaspersky, Symantec and F-Secure get an estimated max lower than both the AV-Test and AV-Comparatives scores?
As was explained several times already, the samples are provided on a daily basis and are available irrespective of whether a test is being conducted. Obviously those companies that avail themselves of the samples have an advantage, but that does not skew the test. It just shows that some companies put more effort into collecting and integrating samples than others.
Well, I for one am not sure what Firecat is thinking IBK is thinking, but I am thinking that for those of us less telepathically developed, a bit more clarity would be a plus.
I'm sure the decision wasn't arbitrary and was carefully considered. Although I'd like to know from IBK what the reasons were, if he's willing to say, that is.