AV-Comparatives Retrospective / Proactive Test May 2011 released!

Discussion in 'other anti-virus software' started by clayieee, May 25, 2011.

Thread Status:
Not open for further replies.
  1. MarkKennedy

    MarkKennedy Registered Member

    Joined:
    Jun 16, 2011
    Posts:
    19
    What he said.
     
  2. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Sorry for not giving enough information ;)
    that's why I added the complementary word at the end
     
  3. firzen771

    firzen771 Registered Member

    Joined:
    Oct 29, 2007
    Posts:
    4,815
    Location:
    Canada
    Just out of curiosity, what kind of customers ask about AV-C test results? The ones knowledgeable enough to know about AV-C don't generally ask someone to explain it to them, and everyone else most likely has never even heard of AV-C. o_O
     
  4. Macstorm

    Macstorm Registered Member

    Joined:
    Mar 7, 2005
    Posts:
    2,642
    Location:
    Sneffels volcano
    Avira & Kaspersky, the highest certification levels :thumb:
     
  5. qakbot

    qakbot Registered Member

    Joined:
    Aug 25, 2010
    Posts:
    380
    Excellent points made.
     
  6. qakbot

    qakbot Registered Member

    Joined:
    Aug 25, 2010
    Posts:
    380
    Agree
     
  7. qakbot

    qakbot Registered Member

    Joined:
    Aug 25, 2010
    Posts:
    380
    Avira has always gotten excellent results on retrospective tests, but in the real-world tests they always fall flat. Says something about the retrospective tests, doesn't it? After all, if they are so good at detecting unknown malware using static heuristics, how come those same static heuristics let them down on unknown malware in the real-world tests?
     
  8. tuatara

    tuatara Registered Member

    Joined:
    Apr 7, 2004
    Posts:
    777
    Firzen771 wrote:
    My customers include AV resellers and ICT specialists.
     
  9. Spooony

    Spooony Registered Member

    Joined:
    Apr 30, 2011
    Posts:
    514
    I could show some tests that were done by hackers where they threw their own malware samples at the different security products. The highest detection rate was 3 percent, and they were all updated.
    How about Zeus? 55 percent of the PCs it infected had updated AVs. Let's not forget one ran on Amazon's EC2 cloud.
    Maybe the guys at AV-Comparatives should get themselves a couple of malware toolkits, create their own samples, then throw them at the updated AV products. But I think someone will moan about something with that as well. So it's a no-win situation.
     
  10. Noob

    Noob Registered Member

    Joined:
    Nov 6, 2009
    Posts:
    6,491
    Knowledgeable customers :D
     
  11. Spooony

    Spooony Registered Member

    Joined:
    Apr 30, 2011
    Posts:
    514
    Another thing the test shows, never mind who did the best, is the gap between paid and free solutions.
    Personally, choosing an AV is a trial and error affair. It's like a car: there are lots of models out there, but you have to test-drive them to find the one that's good for you personally, the one that feels just right and that you're comfortable with. I've been through them all, or most of them, and I found two or three models I feel comfortable with. But that's for the average home user. For business, speak with a pro. ^
     
  12. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    good idea for testing, but not a good idea for security reasons
     
  13. Spooony

    Spooony Registered Member

    Joined:
    Apr 30, 2011
    Posts:
    514
    Good point. Maybe they could change the signatures of some of the samples. I believe testing against unknown threats is the way to go, but in a manner that makes it as fair and equal as possible: updated products on their out-of-the-box settings.
     
  14. kareldjag

    kareldjag Registered Member

    Joined:
    Nov 13, 2004
    Posts:
    622
    Location:
    PARIS AND ITS SUBURBS
    Hi,
    Any tester always sits in an uncomfortable chair, face to face with various technical dilemmas and challenges and various ethical conflicts.
    Eugene Kaspersky has said that AV testing is an equation without a solution.
    This is mostly due to the AV security approach and design: blacklist/signature-based detection.
    No tester, no organization has the solution or can prove beyond dispute that this antivirus is better than that one.
    As we say in my country, "criticism is easy, but art is difficult", so I will not focus on IBK's tests and work, but will try to circumscribe the main issues of comparative AV testing.

    1/ A comparative test requires the same starting line for every product.
    Oranges with oranges and apples with apples, even if both are fruits: this means equal features for all products.
    As most AVs use various and different approaches (signatures, HIPS engine, cloud, sandbox), comparative tests are unequal in most cases because they don't show the whole potential of the AV.

    2/ Detection rate results and rankings are not an absolute criterion.
    Cohen and Spinellis have demonstrated that AV detection is an NP-complete and undecidable problem: AV developers have a Sisyphean task.
    VB100 results are 100% corrupted by default.
    Norton has 98% and Avast 95%? So what? It does not prove that Norton is more efficient than Avast.

    3/ Detection tests must be statistically reliable: a simple collection of 100 malware samples is not enough; there is a real need for several GB of malware of all kinds and ages (a rough numerical sketch follows after point 5).

    4/ Detection rate results must be trustworthy: for this purpose, there's a real need for a signature extraction scheme for each malware sample and each AV.
    But this can only be done with a few samples, due to the time it costs.
    Challenges 3 and 4 are not compatible: even if all the skilled people in the world took part in such a test, it would require months and months to complete with 100,000 or 500,000 malware samples, and during that period the AVs would have been updated (signatures and engines) and new malware would have seen the sun.

    5/ Tests must be independent: recent history has shown how financial dependency and conflicts of interest can be a disaster (rating agencies and the financial crisis).
    AV-Comparatives' tests are ethically less corrupted than VB100 or AV-Test.
    I agree that a test lab has costs for storage, computers, or students, but the money must come from an independent party, not from the AV vendors!
    Conflicts of interest are not compatible with independence.
    Fortunately, independent tests exist, but they are quite confidential (government institutes, engineering magazines, military assessments).
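
    To put rough numbers on point 3, here is a minimal sketch of my own (assuming a simple binomial model and a ~95% normal-approximation interval; the figures are purely illustrative, not taken from any AV-C report) of how uncertain a detection rate measured on only 100 samples really is:

        # Illustrative only: margin of error for a measured detection rate
        # under a simple binomial model (normal approximation, ~95% level).
        import math

        def margin_of_error(rate, n, z=1.96):
            """Half-width of the ~95% confidence interval for a detection rate."""
            return z * math.sqrt(rate * (1 - rate) / n)

        for n in (100, 1000, 100000):
            moe = margin_of_error(0.95, n)
            print(f"n = {n:>6}: a measured 95% is really 95% +/- {moe * 100:.1f} points")

        # With n = 100 the interval is roughly +/- 4 points, so a "95%" product
        # and a "98%" product overlap; only with tens of thousands of samples
        # does such a gap become statistically meaningful.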

    As far as I'm concerned, comparative AV tests as they're currently done are not helpful for the average user (they only feed endless discussions about my AV performing better than yours :) ):
    -they do not help average users make the right choice for their real needs (country and language, experience level, budget, etc.).
    That's why tests from amateurs and computer magazines could be interesting.
    -they do not test unreliable antiviruses: by testing more AVs, the test would show that there is a list of reliable AVs and a list of unreliable AVs (Abacre, etc.).

    There is too much to say, but I suggest IBK gets some fresh air on the Grossglockner!

    Regards to all in general and Peter Falk in particular.
     
  15. Spooony

    Spooony Registered Member

    Joined:
    Apr 30, 2011
    Posts:
    514
    But the problem is that if they do badly in 2 of the tests, they pull out and don't want to play anymore.
     
    Last edited by a moderator: Jun 24, 2011
  16. J_L

    J_L Registered Member

    Joined:
    Nov 6, 2009
    Posts:
    8,738
    @kareldjag: Interesting points. I wouldn't say tests aren't helpful for average users, because how else are they going to find some kind of benchmark of their AV's effectiveness? Sure, there are the braggers, but generally I believe these tests provide useful information for the average user.
     
    Last edited: Jun 24, 2011
  17. The Hammer

    The Hammer Registered Member

    Joined:
    May 12, 2005
    Posts:
    5,752
    Location:
    Toronto Canada
    This assertion has no basis in fact.
     
  18. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    Can you cite any demonstrable FACTS to support your slur? If so, let's see them, please. :cautious:
     
  19. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Some products do not use the on-access scanner with the same settings as the on-demand scanner, for performance reasons.
     
  20. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Yeah, but their heuristics are not stable over time: sometimes Advanced, sometimes Advanced+ or Standard.
     
  21. Macstorm

    Macstorm Registered Member

    Joined:
    Mar 7, 2005
    Posts:
    2,642
    Location:
    Sneffels volcano
    Oh noes .... may I ask then which product performs better in your "real-world" tests?
     
  22. Ford Prefect

    Ford Prefect Registered Member

    Joined:
    Oct 31, 2008
    Posts:
    111
    Location:
    Germany, Ruhrpott
    Nice - could you please provide a link to their paper?

    Regards,
    Ford
     
  23. kareldjag

    kareldjag Registered Member

    Joined:
    Nov 13, 2004
    Posts:
    622
    Location:
    PARIS AND ITS SUBURBS
    Guten Tag,

    Detection rate test results are not important. A modern line of defense requires a multilayered strategy: that's why I have argued for years for the need for a HIPS. With products like DW, GeSWall, Sandboxie and co, even with the worst antivirus, the user is still well protected against most threats.
    Regarding demonstrations of the dead end of the AV detection concept, the work and publications of Zhou and Filiol could also be mentioned.
    Most of them rely on mathematical theories/paradigms/algorithms.
    Regarding Fred Cohen, this is in his thesis...
    Spinellis: "Reliable identification of bounded-length viruses is NP-complete":
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.9804&rep=rep1&type=pdf

    Eric Filiol provides an excellent overview of the problem:
    http://www.docstoc.com/docs/2007364...o-Applications---E-Filiol-_Springer_-2005_-WW
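
    For readers who want the formal statements behind those citations, here is a rough paraphrase in notation (my own wording, hedged, not a quote from either paper):

        % Rough paraphrase (my wording) of the two hardness results cited above.
        Cohen (1987): exact virus detection is undecidable; there is no total
        decision procedure $D$ with
        \[
          \forall p:\quad D(p) = 1 \iff p \text{ is viral}.
        \]
        Spinellis (2003): even with a fixed length bound $\ell$, the language
        \[
          \{\, p \;:\; |p| \le \ell \text{ and } p \text{ is viral} \,\}
        \]
        is NP-complete, so bounded-length detection is intractable unless $\mathrm{P} = \mathrm{NP}$.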

    And regarding the corrupted mindset of some AV vendors, noticed earlier than expected by AV-C, I've related an example of such practices here:
    https://www.wilderssecurity.com/showthread.php?t=293865

    Auf Wiedersehen
     
  24. MarkKennedy

    MarkKennedy Registered Member

    Joined:
    Jun 16, 2011
    Posts:
    19
    If you are in the mood for papers, I would direct you to the AMTSO website (http://www.amtso.org/documents.html). Especially relevant to this thread are these:

    AMTSO Fundamental Principles of Testing
    AMTSO Best Practices for Testing In-the-Cloud Security Products
    AMTSO Issues involved in the "creation" of samples for testing
    AMTSO Whole Product Testing Guidelines
    AMTSO False Positive Testing Guidelines
     
  25. qakbot

    qakbot Registered Member

    Joined:
    Aug 25, 2010
    Posts:
    380
    Let's see...

    Bitdefender
    ESET
    FSecure
    GData
    Kaspersky
    Panda
    Symantec
    TrendMicro

    http://www.av-comparatives.org

    Yeah, Avira is like a BMW. It looks great when you drive it around the block, but in the real world, on the highway, driving it every day, expect water pump issues :)
     
    Last edited by a moderator: Jun 27, 2011