VB100: RAP averages quadrant test Feb - Aug 2016

Discussion in 'other anti-virus software' started by anon, Dec 5, 2016.

  1. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    8,427
  2. Behold Eck

    Behold Eck Registered Member

    Joined:
    Aug 23, 2013
    Posts:
    579
    Location:
    The Outer Limits
    Thanks for the update anon.

    I see that there are no magic 100-percenters at VB100, unlike at some other testing sites?

    Regards Eck:)
     
  3. guest

    guest Guest

    Reading the test, I understand this is like a real-time test, right?
     
  4. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    The RAP test is actually a measure of real-time detection combining proactive + signature + cloud defense. I consider it very accurate in comparison to AV-Test or AV-C, because it does not place emphasis on one specific kind of detection. In general, anything that scores 85% or better on RAP should be fine for most kinds of users.
     
  5. ance

    ance formerly: fmon

    Joined:
    May 5, 2013
    Posts:
    1,359
    100% is ridiculous; it will never be possible.
     
  6. Behold Eck

    Behold Eck Registered Member

    Joined:
    Aug 23, 2013
    Posts:
    579
    Location:
    The Outer Limits
    Yeah, and the graphics give an instant heads-up as to everyone's performance at a glance. :thumb:

    Regards Eck:)
     
  7. avman1995

    avman1995 Registered Member

    Joined:
    Sep 24, 2012
    Posts:
    944
    Location:
    india
    If you test 20 samples, and 19 fail... you should be asking "is there anything wrong with my sample selection?". "Did I actually choose 19 samples of the same malware family?" etc.
    There are hundreds of thousands of samples out there that any particular product doesn't detect, no matter what product it is... so unless you're pretty damn sure that your selection is random and represents the overall situation well (or not - sure, you can be interested in your local malware, in which case the sample set should represent that, of course) - then no, the result is meaningless, and the correct reaction is "so what".

    This seems to be a problem with most sample sets, small or big. Do the testers filter their sample sets? The same problem arises with home-grown tests. As usual, it's good to perform well at AV-C and in these tests, and even though I have my gripes, these organizations seem well rounded, since they have been running these tests for years. I've never been a big fan of them, though. Still, it's interesting to watch the graphs and detection ratios fluctuate from month to month, even if the sample sets differ. It also gives us something to talk about. ;) The sample-size point from the quote is sketched below.
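
    To make that sample-size point concrete, here is a rough back-of-the-envelope sketch. It assumes independent, randomly drawn samples and a simple binomial model, which real test sets rarely satisfy, and the wilson_interval helper is purely illustrative, not anything the testers actually use:

    # Why a 20-sample test says very little: with so few samples, the
    # uncertainty around the measured detection rate is enormous.
    import math

    def wilson_interval(detected, total, z=1.96):
        """Approximate 95% Wilson score interval for a binomial proportion."""
        p = detected / total
        denom = 1 + z ** 2 / total
        centre = (p + z ** 2 / (2 * total)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2))
        return max(0.0, centre - half), min(1.0, centre + half)

    for detected, total in [(1, 20), (19, 20), (850, 1000)]:
        lo, hi = wilson_interval(detected, total)
        print(f"{detected}/{total} detected -> plausible true rate {lo:.1%} to {hi:.1%}")

    With only 20 samples, the plausible range for the true detection rate spans more than twenty percentage points, so a 1/20 or 19/20 result on its own tells you almost nothing; at 1,000 samples the range narrows to a few points.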
     
    Last edited: Jan 10, 2017