Discussion in 'other anti-virus software' started by FleischmannTV, Nov 12, 2014.
eScan uses it too.
AVG has improved. I guess the new cloud technology does work after all.
Nope, I don't think Win8.1's Windows Defender is that bad.
AVG: Sept= 96,5 / Oct= 97,8
Avast: Sept= 95,8 / Oct= 95,9
I am a little unclear here. Has SmartScreen ever been tested for its protection?
I believe it would at least give a pop-up if a program is relatively new and untrusted? Can it be considered something like a cloud anti-executable?
The sample set is way too small for a month. Considering thousands coming out every day, in reality we have millions of malware samples every month. So a few hundreds vs. 10,000,000. Where is the catch?! So Avast missed 4.1%, right?? Not a big deal if you know mathematics.
Just do the math and it's just 24.2 samples missed for Avast! Big margin, ehhh!! I am sure all the AV companies get the missed samples after the test is done, so everyone must have created definitions for them by now.
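A quick sanity check of the arithmetic above (assuming the 24.2 missed samples and the 4.1% miss rate are both taken at face value; neither figure is from the test report itself):

```python
# Back-of-the-envelope check: if 24.2 missed samples correspond to a
# 4.1% miss rate, how big must the test's sample set have been?
miss_rate = 0.041        # Avast's claimed miss rate (4.1%)
missed = 24.2            # claimed number of missed samples
sample_set = missed / miss_rate
print(round(sample_set))  # implied sample-set size: roughly 590
```

So the "24.2 samples" figure implies a test set of only around 590 samples, which is the point the small-sample objection rests on.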
Calm down and look at the test results.
Whether you like the results or not, it's necessary to point out that the sample set is the same for your beloved AV and all the rest.
You forgot an excuse for AV-Test.org also........
If you know how many viruses are out there, you can calculate how big a sample must be to reach a desired confidence level. You can use this simple calculator: http://www.surveysystem.com/sscalc.htm
So if you do the math and take 10,000,000 samples (your figure) into account, even a miss percentage in the low single digits leads to a missed set in the hundreds of thousands.
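For what it's worth, calculators like the one linked above typically use the standard sample-size formula for a proportion with a finite-population correction. A minimal sketch (the worst-case p = 0.5 assumption and the 95%/±3% example figures are mine, not from the thread):

```python
import math

def sample_size(z: float, margin: float, population: int) -> int:
    """Required sample size for a given z-score, margin of error, and
    population, using the proportion formula with finite-population
    correction and the worst-case proportion p = 0.5."""
    p = 0.5
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)             # finite-population correction
    return math.ceil(n)

# 95% confidence (z ≈ 1.96), ±3% margin, 10,000,000 malware samples
print(sample_size(1.96, 0.03, 10_000_000))  # 1067
```

Note the population size barely matters here: at 10,000,000 samples the correction is negligible, so a few hundred to a thousand randomly drawn samples already gives a statistically meaningful result.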
Fortinet ahead of many big players...
If the sample is too small, it can't be considered representative. The confidence level is too low and we could disregard the whole test...
I understand, thank you.
You may have a point there, and it might also be valid for a one-, two-, or three-test situation, but let's face it: Avast has been mediocre for quite a while now. Something is definitely amiss... Maybe its default configuration is to blame.
But only if it is a random sample. If the different malware types are systematically represented in the sample, a small sample can be more valid than a large one.
So this "mathematics" from true indian, with xxx.000 samples a day vs. a small test set, is not valid, because the samples are often very similar, belong to families, and signatures and/or proactive components catch more than one. Besides that, prevalence data play a huge role in many professional (signature!) tests...
Yes, I agree. A lot of parameters could be used when analyzing performance (like virus distribution paths, computer usage scenarios...), but those are hard to include in tests. Since we don't have a central database of all malware, it's also hard to get a "random" sample.
Yes, finally a test where AVG did not finish in the "also ran" category.
Congrats to Qihoo 360, but the test uses V5. What's up with that? The U.S. version is still at 4. I wonder if AV-Comparatives will ever test Total Security?
Trend is looking really good. I hope Avast with NG will start showing better results.
Love Fortinet...just too heavy for older systems...
Fortinet is very good, especially for free, except it's super heavy IMO.
Maybe we will see this again next time. If so, I'll be happy for AVG, though I'm not sure (lol).
Thank you. I knew there was some statistical rule for figuring out how large a sample has to be in order to be meaningful.
Regarding the test, seems Panda had a bad month.
Avira's results are also very disappointing...
...but still better than Avast and Lavasoft.
Not such a good performance by Panda either. I was used to seeing 100%.
Anything below 100% is boring. Above 100% is the exciting stuff.