[AV-Comparatives] Adware/Spyware/Rogue (PUA) detection test

Discussion in 'other anti-virus software' started by guest, Dec 3, 2009.

Thread Status:
Not open for further replies.
  1. RejZoR

    RejZoR Lurker

    Joined:
    May 31, 2004
    Posts:
    6,426
    That's because the engine doesn't change that drastically. For a beta, your engine has to be very stable, with only minor changes (if we don't count the Behavior Shield, which is still under construction).
     
  2. Fajo

    Fajo Registered Member

    Joined:
    Jun 13, 2008
    Posts:
    1,814
    So really, in the end, this test shows what Avast 5 would do if it were live, simply because the only part that is not active is the Behavior Shield. It's still using the same definitions as the non-beta version.
     
  3. The Hammer

    The Hammer Registered Member

    Joined:
    May 12, 2005
    Posts:
    5,752
    Location:
    Toronto Canada
    When they decide to stop dodging scrutiny and ask to be included in the testing. You have to apply to be tested, you know.
     
  4. Fajo

    Fajo Registered Member

    Joined:
    Jun 13, 2008
    Posts:
    1,814
    Ahh, kind of like VIPRE. They have been saying "we are working on it" for the past, what, year and a half?
     
  5. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    AV-Comparatives could improve, in my opinion, the realism of this test by:

    1. For missed samples, checking whether the anti-virus product would have blocked the execution of the potentially unwanted application (PUA). A PUA that has never been executed on a PC has not subjected the user to any risk, so the most meaningful analysis would focus on those instances. AV-Comparatives acknowledges this limitation in its report, but failing to consider HIPS and behavior blockers significantly reduces the real-world usefulness of the analysis.

    2. Providing further detail by reporting, for each anti-virus product, the percentage of (a) adware, (b) spyware, and (c) rogue software that was missed. A user's risk from the latter two categories in particular is higher, so a missed sample in those categories is more troublesome.
     
  6. NAMOR

    NAMOR Registered Member

    Joined:
    May 19, 2004
    Posts:
    1,530
    Location:
    St. Louis, MO
    IIRC the PDF stated that AV-Comparatives used 750K samples; it might be hard to do an execution test with a test bed that size.
     
  7. Fajo

    Fajo Registered Member

    Joined:
    Jun 13, 2008
    Posts:
    1,814
    I would not want to be the tester in charge of executing all of those. I would want to kill myself before the day was done.
     
  8. onigen

    onigen Registered Member

    Joined:
    Oct 26, 2009
    Posts:
    29
    Thanks, IBK, for keeping up the great work :)
     
  9. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    I agree. That's why my suggestion was to perform the "execution test" only on those potentially unwanted applications (PUAs) that were missed. If that quantity were still too burdensome, then taking a random sample of that subset would be sufficient to estimate, with a high degree of confidence, the run-time blocking performance of each anti-virus product.

    As a side note, it's a bit silly to run a detection test with 750,297 PUA cases. The performance of each anti-virus product could be estimated just as well from a much smaller random sample, as the sketch below illustrates.
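
    To put some numbers on that, here is a rough back-of-the-envelope sketch in Python (the 90% detection rate is made up purely for illustration, and the formula is just the standard 95% margin of error for an estimated proportion; nothing here comes from AV-Comparatives):

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a detection rate p estimated from n samples."""
        return z * math.sqrt(p * (1.0 - p) / n)

    p = 0.90  # hypothetical true detection rate
    for n in (1_000, 10_000, 750_297):
        print(f"n = {n:>7,}: {100 * p:.0f}% +/- {100 * margin_of_error(p, n):.2f} points")

    # n =   1,000: 90% +/- 1.86 points
    # n =  10,000: 90% +/- 0.59 points
    # n = 750,297: 90% +/- 0.07 points

    Even 10,000 randomly chosen samples would pin the overall rate down to within about half a percentage point, so the remaining 740,000 samples buy almost no additional precision.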
     
  10. firzen771

    firzen771 Registered Member

    Joined:
    Oct 29, 2007
    Posts:
    4,815
    Location:
    Canada
    Maybe, but the larger your sample set, the higher your level of accuracy, and the less room there is for outliers.
     
  11. NAMOR

    NAMOR Registered Member

    Joined:
    May 19, 2004
    Posts:
    1,530
    Location:
    St. Louis, MO

    I believe that IBK did similar tests a while back under "single product reviews" > "Archive". Some of the behavior tests are old; the newest one was for Kaspersky in 2008. Maybe he will do more if he has time.
     
  12. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    Yes, you're right, of course. But the increase in accuracy is not proportional to the increase in the sample size used in the test -- rather, it is proportional to the square root of the sample size. The amount of work required for the test, however, is directly proportional to the sample size.
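
    To illustrate with the same standard formula (again using a made-up 90% detection rate): cutting the margin of error in half requires roughly four times as many samples, and therefore four times the work.

    import math

    def required_n(p, margin, z=1.96):
        """Samples needed so the 95% margin of error is at most `margin`."""
        return math.ceil(p * (1 - p) * (z / margin) ** 2)

    p = 0.90  # hypothetical true detection rate
    for margin in (0.02, 0.01, 0.005):
        print(f"+/- {100 * margin:.1f} points needs n = {required_n(p, margin):,}")

    # +/- 2.0 points needs n = 865
    # +/- 1.0 points needs n = 3,458
    # +/- 0.5 points needs n = 13,830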
     
  13. Fajo

    Fajo Registered Member

    Joined:
    Jun 13, 2008
    Posts:
    1,814
    IBK said more tests are coming; I can't wait to see what he has in store. Maybe something that tests all aspects of the program instead of just a few. :cool:
     
  14. flik

    flik Registered Member

    Joined:
    May 21, 2006
    Posts:
    49
    They've left the best for the end of the year: I'm referring to the dynamic test.
    Great work, IBK, thanks!
     
  15. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    Yes, an anti-virus comparative that more closely mirrors the experiences of users in the real world would be most welcome.

    Note the admission by AV-Comparatives that non-dynamic tests fail to encompass the "full capabilities" of the anti-virus products. As a consequence, such tests fall short of properly representing the degree of malware protection provided by each product and cannot, in my opinion, be used to effectively evaluate their competitive performance.
     