AV Comparatives Sep 2013 File Detection Test

Discussion in 'other anti-virus software' started by Inside Out, Oct 14, 2013.

Thread Status:
Not open for further replies.
  1. SecurityChef

    SecurityChef Registered Member

    Joined:
    Oct 9, 2013
    Posts:
    9
    Location:
    Florida
    Every piece of malware that could be executed was in fact executed, which produced the trace and component files also included in the test-results PDF.

    These components and traces include downloader scripts, login items, backdoors, and 'listeners' waiting for incoming commands from C&C servers. I'd definitely count them as relevant. Keyloggers, hacking tools, etc. can all cause serious damage to the infected user, whether data loss or financial loss; such tools should not be excluded from any test, but taken as seriously as other malware.
     
  2. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    @IBK:

    Behavioural tests are run only once a year; it should be twice per year.
    I believe AV-Comparatives is moving in that direction, as I can see some sort of hybrid approach (static/runtime) in these behavioural tests.
     
  3. bigc73542

    bigc73542 Retired Moderator

    Joined:
    Sep 21, 2003
    Posts:
    23,934
    Location:
    SW. Oklahoma
    McAfee and Panda are number 2 in detection. Pretty good score.
     
  4. mattdocs12345

    mattdocs12345 Registered Member

    Joined:
    Mar 23, 2013
    Posts:
    1,892
    Location:
    US
    Forti, as always, did well. Avast has been on a decline in my own experience as well.
     
  5. explanoit

    explanoit Registered Member

    Joined:
    Dec 1, 2013
    Posts:
    28
    Location:
    USA
    I can accept Webroot's explanation regarding their detection performance due to their different approach, but does anyone know why they fail so hard at FPs? Have they ever addressed that publicly?
     
  6. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    Cherry-picking one bad FP test doesn't show a trend. We've had cases where edge conditions triggered a large number of false positives in a single instance, but that isn't something a real customer would run into: it's caused by dynamic flexing, where the backend enters an "alert mode" after a large number of new infections are found in a short period. We've corrected the backend logic for this, and no real customers were ever affected. We do have FPs from time to time, but never on popular software or components.
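    For readers curious what a rate-based "alert mode" might look like, here is a minimal sketch of the general idea: a sliding-window counter of recent detections that flips the engine into a more aggressive state when the rate spikes. All names and thresholds are illustrative assumptions, not Webroot's actual backend logic.

    ```python
    from collections import deque

    class AlertModeMonitor:
        """Hypothetical sketch: track detection timestamps in a sliding
        window and enter 'alert mode' when the rate exceeds a threshold.
        Entering alert mode could lower conviction thresholds, which is
        the kind of state that can inflate false positives in a test."""

        def __init__(self, window_seconds=60, threshold=100):
            self.window = window_seconds    # sliding window length (s)
            self.threshold = threshold      # detections needed to trigger
            self.events = deque()           # timestamps of new detections

        def record_detection(self, timestamp):
            self.events.append(timestamp)
            # Expire events that fell out of the sliding window.
            while self.events and timestamp - self.events[0] > self.window:
                self.events.popleft()

        def in_alert_mode(self):
            # Alert mode: detection rate inside the window is high.
            return len(self.events) >= self.threshold
    ```

    With a low threshold for demonstration, three detections inside one minute would trigger alert mode, while widely spaced detections would not, since old events expire out of the window.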
     
  7. explanoit

    explanoit Registered Member

    Joined:
    Dec 1, 2013
    Posts:
    28
    Location:
    USA
    Thanks for the reply. I thought I had seen a trend of Webroot FPs across multiple tests and test providers, but the last time I looked into them was during the initial launch. If that's been cleaned up I'll take your word for it.

    At work I do occasionally need to submit FPs, but you're right that they tend to be obscure or even in-house software.
     