AV-Comparatives: Heuristic / Behaviour Test March 2015

Discussion in 'other anti-virus software' started by FleischmannTV, Jul 3, 2015.

  1. FleischmannTV

    FleischmannTV Registered Member

    Joined:
    Apr 7, 2013
    Posts:
    1,071
    Location:
    Germany
  2. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    4,095
    http://www.av-comparatives.org/retrospective-test/
    Heuristic / Behaviour Test March 2012 = 18 Products were tested.
    Heuristic / Behaviour Test March 2013 = 17 Products were tested.
    Heuristic / Behaviour Test March 2014 = 13 Products were tested.
    Heuristic / Behaviour Test March 2015 = 12 Products were tested.
    Heuristic / Behaviour Test March 2016 = ? Products were tested.
    :isay:
     
  3. StillBorn

    StillBorn Registered Member

    Joined:
    Nov 19, 2014
    Posts:
    162
    First thing I noticed was the decline in vendor participation as well.
     
  4. Brandonn2010

    Brandonn2010 Registered Member

    Joined:
    Jan 10, 2011
    Posts:
    1,849
    Hey Vipre did bad again! I don't remember them being this bad when Sunbelt owned them.
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    You forgot:
    2010: 15
    2011: 12

    Since the test started to also include behavioural component capabilities (in 2011), some vendors which do not have strong behaviour blockers or similar features preferred not to opt in. As you can imagine, most vendors do not like tests in which they score low (as can happen in this type of test).
     
  6. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    4,095
    No, I just mentioned the latest years.
    Do you think that I had to start from 2004?

    "Elementary, my dear Watson IBK."
     
  7. Fabian Wosar

    Fabian Wosar Developer

    Joined:
    Aug 26, 2010
    Posts:
    787
    Location:
    Germany
    Given how big a disadvantage the current methodology is for some products, I am not surprised. We may do the same next time, since participation is optional.
     
  8. haakon

    haakon Registered Member

    Joined:
    May 25, 2015
    Posts:
    761
    Location:
    SW USA
    Caution: this post contains more than 140 characters.

    All that applies only if you're in the camp, as many are not, that thinks this kind of test counts for anything; that has already been debated above and profusely in other threads and forums. Anyhow...

    It's interesting to note how the results for the five Bitdefender-based products compare to the outstanding Bitdefender IS itself. For example, Lavasoft's Ad-Aware FREE is down 46.5% from Bitdefender IS, clearly where the latter's AVC, Active Virus Control (and maybe the firewall IDS?), takes up where B-HAVE (the Bitdefender engine's heuristics) leaves off.

    Oddly, eScan IS uses AVC but is still down 6.5% from BDIS.

    (By some unfortunate decision, much to its detriment, Lavasoft chooses not to submit Ad-Aware Pro Security, where a robust implementation of Bitdefender's SDK is employed, with AVC and Bitdefender's firewall IDS. And, of course, the Bitdefender engine. No doubt far less "red" would have been returned.)

    F-Secure IS, which (to the best of my knowledge) does not have AVC, is down 6.0%, with its user-dependent results accounting for 0.2%.

    Down by only 0.3% from Bitdefender IS, Emsisoft AM is saved by its stellar 45.6% user-dependent score. BullGuard, down by 4.1%, is strengthened by its user-dependent score of 8.8%.

    Finally, comparing the four free products: it seems you get what you pay for. :doubt:
     
  9. Technic

    Technic Registered Member

    Joined:
    Aug 31, 2005
    Posts:
    428
    Protection versus performance impact (slowing down common PC operations)? Is your PC a test bed? Just asking...
     
  10. chillstream

    chillstream Registered Member

    Joined:
    Aug 2, 2013
    Posts:
    49
    Location:
    Croatia
    Honestly, I'm not that surprised by avast's very poor result in this test, although they brought it upon themselves by not enabling Aggressive Hardened mode by default. It's a very mature component by now and I hardly ever see its prompts for valid applications.
     
  11. FleischmannTV

    FleischmannTV Registered Member

    Joined:
    Apr 7, 2013
    Posts:
    1,071
    Location:
    Germany
    File reputation is cloud-dependent (with every product) and this test was conducted offline. Hence a product can neither use reputation to block a threat, nor to reduce false alarms or user-dependent decisions. Thus the products in this test performed considerably differently than they would in a real environment.
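    To illustrate the point, here is a minimal, entirely hypothetical sketch (all names and thresholds invented, not any vendor's actual logic) of why cutting the cloud changes results: with no reputation lookup available, every decision falls back to local heuristics or to the user.

    ```python
    def verdict(local_heuristic_score, cloud_reputation=None):
        """Hypothetical detection decision, with or without cloud reputation."""
        if cloud_reputation == "known_good":
            return "allow"        # reputation suppresses a possible false alarm
        if cloud_reputation == "known_bad":
            return "block"        # reputation convicts without local analysis
        # Offline: only local heuristics/behaviour analysis remain.
        if local_heuristic_score > 0.8:
            return "block"
        if local_heuristic_score > 0.5:
            return "ask_user"     # counted as a user-dependent result
        return "allow"

    print(verdict(0.6))                 # offline: user-dependent prompt
    print(verdict(0.6, "known_good"))   # online: allowed silently
    ```

    The same borderline file produces a prompt offline but is silently allowed online, which is exactly why offline numbers for false alarms and user-dependent decisions look so different.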
     
  12. chillstream

    chillstream Registered Member

    Joined:
    Aug 2, 2013
    Posts:
    49
    Location:
    Croatia
    Was not aware of that. Thanks.
     
  13. roger_m

    roger_m Registered Member

    Joined:
    Jan 25, 2009
    Posts:
    5,242
    In which case, in my opinion, they should have done a second test online, or even conducted the test online only.
     
  14. FleischmannTV

    FleischmannTV Registered Member

    Joined:
    Apr 7, 2013
    Posts:
    1,071
    Location:
    Germany
    The only problem with that is that the product can then use the signatures in the cloud or update its database (and more) and doesn't need to rely on behavioral analysis and heuristics. The methodology is flawed, but it's hard to conduct a true heuristic/behavioral test without making it another real-world test.
     
  15. roger_m

    roger_m Registered Member

    Joined:
    Jan 25, 2009
    Posts:
    5,242
    @FleischmannTV I didn't consider that, and with that in mind I can see why the test was conducted offline.
     
  16. ArchiveX

    ArchiveX Registered Member

    Joined:
    Apr 7, 2014
    Posts:
    1,016
    F-Secure :thumb:
     
  17. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Bitdefender's Active Virus Control is suspiciously good in this test (without the sophisticated FP-rate scheme of the real-world test).
     
  18. Azure Phoenix

    Azure Phoenix Registered Member

    Joined:
    Nov 22, 2014
    Posts:
    359
    I see. Thank you. That would explain Emsisoft's result, since the product doesn't have access to its network.
     
  19. vlk

    vlk AV Expert

    Joined:
    Dec 26, 2002
    Posts:
    618
    Not only is the test conducted offline, it is also an on-demand scanning test only. The "Hardened mode" in Avast applies only to execution, of course (i.e. the "on-access" component).
     
  20. siketa

    siketa Registered Member

    Joined:
    Oct 25, 2012
    Posts:
    2,695
    Location:
    Zagreb, Croatia
    @Azure: True. Lots of malware would be auto-quarantined that way.
     
  21. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Having said that, it would represent a good advantage for most products involved, taking into account the focus of the FP set.
     
  22. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    According to the report, it is a mix of on-demand, on-access and on-execution. To what extent each one, in percent?
     
  23. Fabian Wosar

    Fabian Wosar Developer

    Joined:
    Aug 26, 2010
    Posts:
    787
    Location:
    Germany
    I don't think they cheated in any way. But I do believe that the miss numbers of products may be inflated. When you check the methodology you will notice that the sample selection has changed this year. AVC used to limit malware samples to one per family. So the same malware being recompiled 20 times would only be included and counted once. This time, however, AVC decided to include multiple variants of the same malware family and count each variant. Of the 19 samples we missed, for example, 14 were essentially the same malware, just slightly repacked. Depending on the distribution of the overall set, missing just one malware family that was particularly popular could potentially result in a large number of missed files. Because of that, judging the results becomes incredibly difficult. Bitdefender, for example, let 15 samples compromise the system. These could be 15 different families, which would arguably make them worse than the 6 families we missed. It could also be just one family that had 15 variants in the test set.

    Personally I can see why AVC may want to allow multiple variants per family. Missing a highly prevalent malware family is arguably worse than missing a malware family that is a lot less common. So building the sample set based on prevalence data makes sense in that regard, as malware families that are very active will naturally be more present in the resulting test set. I just wish they would also continue to provide the old metric: introduce a second column that counts misses based on unique families represented in the test set. That way results would be a lot easier to compare.

    Samples were executed. So it was not just an on-demand scan.
     
  24. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,819
    Location:
    Innsbruck (Austria)
    wrong. files are EXECUTED.
     
  25. true indian

    true indian Registered Member

    Joined:
    Sep 24, 2012
    Posts:
    755
    Location:
    india
    Even if the test was conducted offline, and even if the files were executed, there is a HUGE disadvantage, as most AV programs have a major part of their technology in the cloud.

    For an up-to-date avast, Evo-Gen is the only offline on-execution technology, and this version doesn't have some of the stuff the beta has. It is missing its big guns when offline, like FileRepMalware, FilereMetagen, FilerepSnxClass, Filestring and some of the other online components. Even part of the DeepScreen stuff needs online support.

    I have been using the new avast! beta and the HIPS idea looks like HUGE support for the other components when offline.
     