Thanks. Nice to see Emsisoft tested and performing well. AVG continues to shine. Microsoft dropping back a bit.
Interesting that they've fleshed out the performance section to show % change in performance across a number of areas in these latest results.
The performance section is much better now. It was interesting to see how much ESET slows browsing (I ran some tests to confirm). I think it is strange that AV-Comparatives does not include browsing impact in their performance tests.
Hmmmm. 20% slowdown as an industry standard for launching popular websites. I wonder how much that value is reduced by the use of EasyList and EasyPrivacy blocking, some tracking blocker (Ghostery, PrivacyBadger, etc.) and a Flash/HTML5 blocker. I whitelist some sites and it takes significantly longer to render a page while monitoring endless connections to dozens and dozens of blockable elements and trackers and too many video ads. Anyhow, don't forget to disable your AV when browsing those "popular" websites. Wink wink nudge nudge.
Do you think the average user actually notices that their favorite pop culture page loads in 3.84 seconds instead of 3.20? Performance data is for OCD geeks, anyway.
I hate surfing without an ad blocker...the ads are the biggest slowdown, not to mention annoying. At the end of the day, if I'm comparing two security products that are the same price and offer similar protection but one slows browsing, I know which one I'm going to choose.
Test is nice and all, but their higher weighting of detection relative to protection is wrong IMO. It's like saying: this system protected you from all the thieves but didn't know the names of some of them, so we're going to give a higher score to this other system, which didn't protect you against some of the thieves but named more of them.
as far as i can tell, their categorical ratings are based on relative numbers - e.g. if the average is 97% and something hits 98% or 99%, it's above the average and is awarded a higher score. conversely, if the average is 97% and a product hits 91.8% or 86.4%, it's awarded a lower score. in the case of MSE (which is what my link is) you can see they obviously didn't "weigh" wildlist/high-prevalence stuff more than newer (protection category) threats. protection testing methodology is explained here
Comodo results indicate otherwise or they have some really weird scoring on the detection part. P.S. I know the methodology, but I don't think they explain how they score.
I don't understand the reasoning behind testing the Chinese version of Qihoo 360 Antivirus v5 and not 360 Total Security. Looks like it's using Qihoo's own engines plus Bitdefender.
They look like they're weighted the same. Fewer prevention cases vs wildlist cases. MSE was well below avg on prevention but 99.6 & 99.7 on wildlist, but scored 3. Comodo was way below avg on wildlist but scored 100% on prevention and got a 4/6. Probably something like 3 pts each. At any rate, I contacted them to ask because there's no point in speculating.
Here's a link to the Protection category methodology: https://www.av-test.org/en/test-procedures/test-modules/protection/ As far as the awarding of points: Home-user products must achieve at least 10 of the 18 points available and at least 1 point in each category in order to earn an "AV-TEST CERTIFIED" seal of approval. Corporate solutions must achieve 10 of the 18 points available and at least 1 point in each category in order to receive the "AV-TEST APPROVED" seal of approval. Each category is awarded a maximum of 6 points; that's what the "circles" represent on the test report. How the points are given in each category is a mystery. My guess is that it's based on the percentages awarded in the test sub-categories. For the protection category, there are two, which would mean a maximum score of 3 points each. I suspect there is a minimum percentage threshold, let's say 70%: anything below that gets 0 points, and above 70% the points are awarded in increments of 1 for every 10% increase.
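To make that guess concrete, here's a quick sketch. Note that the 70% threshold and 10% step are pure speculation on my part (AV-TEST doesn't publish the mapping); only the 10-of-18 certification rule with at least 1 point per category comes from their stated criteria.

```python
def guess_subcategory_points(pct, threshold=70.0, step=10.0, max_points=3):
    """Speculative model of how a sub-category percentage might map to
    points: 0 below the threshold, then +1 point for every `step`
    percent at or above it, capped at `max_points`."""
    if pct < threshold:
        return 0
    return min(max_points, int((pct - threshold) // step) + 1)

def is_certified(protection, performance, usability):
    """AV-TEST home-user certification rule as published: at least
    10 of 18 total points, and at least 1 point in each of the three
    categories (each worth a maximum of 6)."""
    scores = (protection, performance, usability)
    return sum(scores) >= 10 and all(s >= 1 for s in scores)
```

Under that guessed mapping, MSE's 99.6% wildlist result would land at the full 3 points while a weak prevention percentage could still drag the category total down, which at least fits the published scores.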
As above just ignore the point system and look at the actual % to compare effectiveness between products.
To me it looks like there are engineers at work who do not know much about the real world. They tested the Chinese version of Qihoo 360 Antivirus v5, a version for the Chinese market, and linked to the Chinese web site...
The problem with tests like this is that the performance impact of an antivirus will vary from one computer to the next. So, on your own computer/s it's possible that the performance ranking will change.
I received a response from AV-test this morning. The scoring for protection is 3 points per protection subcategory (prevention + detection = 6). There's no weighting (e.g. wildlist/zoo detection isn't more important to scoring than prevention/zero-day/early-lifecycle malware). yep, i agree. that's the best way to look at these tests (and others). most don't seem to want to, for whatever reason. product performance in lab testing is supposed to be considered relative to other products in the same test. it's not supposed to provide you with an idea of how it will perform on your very specific machine. it's just intended to provide an idea of how well products perform relative to each other on the same test hardware. if av-test is anything like AV-C, the vendor tells them which product version/release to test, and they test it. that would suggest that qihoo asked them to test the product version they tested.
I know that, and I see that as a problem, because on different hardware the relative performance will vary. If you were to list the products in order from least to most system impact, that order would vary from one computer to the next. For example, I have seen cases where an antivirus has very little system impact on one computer, but on a different computer running the same operating system and the same version of the antivirus, there have been very noticeable slowdowns.
it's still not about trying to correlate performance gleaned from testing with your very specific machines and experiences (this is inclusive of stuff like "different computer, same os", etc). it's about taking a bunch of products, installing them in a controlled testing environment, then performing the same operations with each product installed and computing a comparative result for all the products running on the same hardware and os, under the same lab conditions, vs a baseline configuration (which, i'd assume, is just the test bed without a product installed). the point of the controlled environment is to make sure it can be reproduced, ensuring the result has validity and that the data is useful. yeah, they provide a pretty good description of the methodology used but leave out the scoring part. it's strange.