Usually the better AVs remain on top, but there are still some noticeable inconsistencies in recent test results. I imagine some of that has to do with the test samples each lab chooses. For example, I understand many did not like the testing method and samples used in the AV-test.org test. But after reading the NOD32 thread, where Andreas explained why he included such samples (justified or not), it made sense in some ways. Were those samples alone a threat? NO. Yet the result was a lot of concerned NOD32 users, when they shouldn't have been. Just as I'm sure many people will be worried when they see FAIL next to KAV and DrWeb in the latest VB test, without even knowing why they failed.

Any thoughts or ideas on this? What would you like to see done (if anything)?

I guess the only thing you can count on is that all tests are flawed, and that AV test results ALONE do not accurately measure how good an AV is.