So in layman's terms, the test includes nonsense files that aren't viruses, then fails a product that doesn't detect them and passes products that do? Isn't that like (was it Cnet's?) the use of simulated viruses in tests, which essentially rewarded AVs for returning false positives and penalized AVs that rightly did not regard them as a threat? And as a consumer, what exactly does this tell me about a product's effectiveness in real-world use? That's what gets me about tests padded with fake stuff like that (if that's what it is in this case): it's misleading to the consumer.

So 3% of this test's zoo "viruses" are nonsense files that aren't bad guys, that I'll likely never encounter on my PC, and that, in the rare case I do, NOD won't waste my time alerting me to? Good. But then why do the testers think it's a good thing for AVs to identify stuff as viruses when it isn't? Reminds me of the time someone recommended an AV to me because "it catches things other AVs don't." So I tried it, and sure enough it did "catch" things others didn't: legitimate program and system files. False positives can be as dangerous as real viruses if they lead a user to delete files they shouldn't. D'oh.

So will GEGA IT please explain the value of this sort of testing to a consumer who wants to know WTH it has to do with ascertaining the effectiveness of protection against real viruses in the real (as opposed to academic) world?