NSS Labs conducted Win32 (32-bit OS) exploit tests on a number of products between 6/15 and 6/30/2015. I don't think this has been previously posted. A must-read for Qihoo 360 fanboys. https://www.nsslabs.com/reports/categories/end-point-protection
Do they have a comparison chart or anything to allow easy digestion of the data, or do you have to open each PDF separately for each category?
NSS tested with IE9. Lowest daily scores:
- ESET: 98.7%
- Avast: 94.6%
- AVG: 93.9%
- Avira: 92%
- Norton: 87.2%
- Trend Micro: 50%
- Panda: 36.1%
- Qihoo: 29%
- MSE: 2%
The methodology they used is a bit dated; it was developed in 2012. Also, the lowest score in this type of test doesn't mean a whole lot, since the monitoring was done over a 60-day period. The mean score is a better measurement in my opinion. Overall, all did well except Panda, Qihoo, and MSE. Mean scores:
- ESET: 100%
- Avast: 99.2%
- Avira: 98.8%
- AVG: 98.4%
- Trend Micro: 98%
- Norton: 97.8%
- Panda: 81.2%
- Qihoo: 80.7%
- MSE: 72.1%
I did not use the average. Do the math: how can one get a 100% average when the lowest protection level is 98.7% (ESET)? Also, the variance of the protection levels should be used to tell us something about the relevance of the exploit sample set itself. These types of tests (using dated browsers) are really useless without a safe-hex reference (what would the protection be from simply using an updated OS and browser?). I simply don't know the level of added protection now.
The test was bogus then and is even more so now:
- They tested v4.9, which was from June 2014; the current version at the time was 6.0 087 084 070 063.
- They disabled the firewall.
- They tested on Win7 32-bit after 64-bit had become the norm and the majority.
The scores shown are means, not averages. Again, read the methodology document to determine the details of why NSS did what it did. The test bed utilized only supports IE9. Since these were all ad hoc, unpaid-for tests, I assume the money wasn't available to update the test bed. I would also like to see 64-bit OS tests with both 32- and 64-bit software. Given the rarity of 64-bit exploits, I assume the economics just aren't there to cost-justify such testing. Finally, lab testers will also encounter the same difficulties as software manufacturers in developing test beds that will work with PatchGuard.
Actually, the mean was 99.98%, which I infer they rounded up to 100%. On one day, ESET scored 98.7%; for the other 59 days, it scored 100%:

59 x 100 = 5900
1 x 98.7 = 98.7
(5900 + 98.7) / 60 = 5998.7 / 60 ≈ 99.98%
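The arithmetic above is easy to sanity-check with a quick snippet (a minimal sketch; the 60 daily scores are reconstructed from the post's description: one day at 98.7%, 59 days at 100%):

```python
# Reconstruct ESET's 60 daily block-rate scores as described:
# 59 days at 100% and a single day at 98.7%.
daily_scores = [100.0] * 59 + [98.7]

mean = sum(daily_scores) / len(daily_scores)
lowest = min(daily_scores)

print(f"mean   = {mean:.2f}%")   # -> mean   = 99.98%
print(f"lowest = {lowest}%")     # -> lowest = 98.7%
```

So reporting only the lowest daily score (98.7%) understates a product that blocked everything on 59 of 60 days, which is why the mean is the more representative figure here.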
That's not an excuse for bad methodology. Downloading current versions of free software is, well, umm...free! When you want to talk of empirical testing and evaluation, you don't make a bunch of assumptions and you don't use irrelevant testing. Read the entire test and evaluate the test itself before using it.