Spotlight on security: Why do AV products score so highly in professional tests?

Discussion in 'other anti-virus software' started by anon, Aug 29, 2018.

  1. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    8,010
    Spotlight on security: Why do AV products score so highly in professional tests?
    27 August 2018
    https://www.av-comparatives.org/spo...e-so-highly-in-professional-tests/#more-26676

     
  2. guest

    guest Guest

    Some points mentioned are indeed legit, but I don't like that they assume all YouTube testers are noobs, while the labs are honest professionals playing in a higher league.
    I know some "home testers" with access to serious 0-day, even 0-hour, samples and with enough knowledge of how malware is delivered to get results as good as the labs'; however, they don't make videos and they don't look for fame.

    Now about labs:

    - How can I know that the labs aren't biased and don't cherry-pick the samples? After all, they are used as a marketing platform by vendors; if their tests showed bad results (say 50% or less), I bet most vendors wouldn't participate anymore.
    - Following on from the point above, how can we readers verify the legitimacy of a test? We don't have access to the samples, we don't know their age, and even the methodologies aren't very clear.
     
  3. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Quote from the blog post (it does NOT say that all YouTube testers are noobs, and it also encourages users to try out the software):
    "However, we would like to point out that some YouTube testers may publish their reviews with the best of intentions and contain useful insights into e.g. the GUI of the product. We encourage users to install a trial version of any AV product they are interested in before making a purchase, so that they can decide for themselves whether it fits their own personal requirements. We also suggest you don’t rely blindly on one single test report, even from reputable independent test labs. Two or three tests, covering all aspects of a product’s protection and performance, will give a more complete picture, and a cross check using a test by another reputable lab might not be a bad idea."
    I would probably have stated more clearly that there are also interesting YT tests, but I think it is obvious enough the way it's written.

    BTW, we will update the FAQ on our website, so if you want to submit questions, we can add answers to it by next year (unless we feel that the answer is already given in the reports or on the website). Please send us your questions using the contact form on our website.

    P.S.: Those who have known me since the beginning (~20 years ago) know that I was always very suspicious of AV and of marketing in general, which is why the testing started (I did not trust the numbers shown in the media or by other testing labs). So, while I fully understand that some ppl have trust issues, I also feel quite insulted* when I read that some ppl think we provide biased tests or have bad intentions (*no worry guest, I know it was not meant as an insult but just as a legit question/opinion from your side ;)). Depending on the type of test we do, you may still see scores of 50% or less. But in the tests which use all protection features (like the WPDT) and focus on prevalent malware, this is unlikely to happen (at least with the products that are usually covered in public tests by the various labs).

    There are good reasons why testing labs do not provide certain info (like the samples used) to the public; these were already discussed years ago - see e.g. https://www.virusbulletin.com/virusbulletin/2008/12/repercussions-dynamic-testing
    "Revealing per-sample test result details is a much more dangerous idea with dynamic testing than it is with response time testing. While there is a low-to-moderate risk in revealing too much detail with response time testing, the risk is very high with dynamic testing.
    Having a limited sample set for testing means that the samples tested need to be very relevant. If testers are going to publicize the results for each such important sample, including how individual products perform against them, then this is extremely valuable information for the malware authors. It will show them against which products they need to improve their creations.
    Virus Bulletin is not yet publishing dynamic testing results, but plans to use information from its (upcoming) prevalence table to pick samples for testing. While the testers will start out just by mentioning malware families, they may end up disclosing specific malware names as well [10].
    AV-Comparatives is also not yet publishing dynamic testing results, but intends to publish the names of the samples being used for its future dynamic tests.
    As pointed out, this approach should be avoided. AV-Test, which is already performing dynamic tests, takes a better approach. Magazines are prohibited from disclosing the malware names or hashes of files that were used in the test. However, AV-Test will share the hashes or samples with the AV vendors that participated in the test [10].
    Though slightly less transparent for end-users, this approach is by far preferable in terms of risk mitigation, while also allowing for any vendor to notify the tester if they find that any non-relevant samples have been used in the test set."
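    To make the "share hashes with vendors only" approach concrete, here is a minimal sketch (purely illustrative; the directory name and helper function are assumptions, not any lab's actual tooling) of how a tester could record SHA-256 hashes of a sample set for private distribution to participating vendors:

    import hashlib
    from pathlib import Path

    def hash_sample_set(sample_dir):
        # Compute the SHA-256 of every file in the (hypothetical) sample directory.
        hashes = {}
        for path in sorted(Path(sample_dir).iterdir()):
            if path.is_file():
                hashes[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
        return hashes

    # The resulting mapping would be shared privately with participating vendors,
    # not published alongside the report.
    # print(hash_sample_set("samples_2018_08"))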
     
  4. guest

    guest Guest

    - I think scriptors (script-based malware) should be used more often in tests; AVs are usually weak against them.

    You surely know that malware coders use underground equivalents of VirusTotal; there are several of them. Malware coders don't need lab reports to know how their malware performs.
    Not to mention that they can install the trial of a product, update it, then cut the internet connection so the sample won't be uploaded.
    So by the time you disclose the malware name or hash, they are already coding the upgraded variant.

    If the malware name/hash can't be disclosed, at least the exact age of the sample set can be; there is a big difference in how a product reacts to a 0-hour sample versus a 0-day or 2-day-old one.
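    As a rough illustration of what reporting sample-set age could look like (the function, thresholds and timestamps below are only assumptions, not any lab's methodology), each result could be tagged with an age bucket computed from the sample's first-seen time and the moment of testing:

    from datetime import datetime, timedelta

    def age_bucket(first_seen, tested_at):
        # Classify how old a sample was when the test was actually run.
        age = tested_at - first_seen
        if age < timedelta(hours=1):
            return "0-hour"
        if age < timedelta(days=1):
            return "0-day"
        return "%d-day" % age.days

    # Hypothetical example: a sample first seen about 30 hours before the test run.
    print(age_bucket(datetime(2018, 8, 29, 6, 0), datetime(2018, 8, 30, 12, 0)))  # prints "1-day"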
     
  5. Gein

    Gein Registered Member

    Joined:
    Dec 8, 2013
    Posts:
    219
    I remember reading a test somewhat similar to that, where the auto-update feature was turned off for a set amount of time before new samples were run. I can't remember if it was AV-Comparatives or AV-Test that did it, but there were protests from several vendors. Not strictly the same thing as looking at how a fully up-to-date product performs against a 0-day, but interesting nevertheless.
     
  6. m0unds

    m0unds Registered Member

    Joined:
    Nov 12, 2015
    Posts:
    219
    MRG Effitas does one like that these days as part of their standard testing for enterprise products, called a "holiday test". Can't remember the length of time the products go without updates, though.
     
  7. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    8,010
    VB100 RAP Test (Reactive and Proactive)
    -----------------
     
    Last edited: Aug 31, 2018
  8. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
  9. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,885
    Location:
    Slovenia, EU
  10. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Several reasons, one of them being that it was amongst the least requested tests from users according to our survey.
    We still do such tests if commissioned, but not as part of our regular testing anymore.
     
  11. shmu26

    shmu26 Registered Member

    Joined:
    Jul 9, 2015
    Posts:
    1,550
  12. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,885
    Location:
    Slovenia, EU
    @IBK thnx for explanation.
     
  13. cruelsister

    cruelsister Registered Member

    Joined:
    Nov 6, 2007
    Posts:
    1,649
    Location:
    Paris
    Very nice topic! My compliments to anon for starting it, as well as to IBK for taking part in this discussion.

    Although I initially thought I would hate what was written, I instead found a thoughtful, carefully worded and (for the most part) legitimate critique. Using the term "most" rather than "all" when describing YouTubers pretty much covers the bulk of the testing seen from those without a .org website.

    However I (obviously) must disagree with a few things:

    1). Having a certain YouTube testing link (which I will NOT repeat) under the "useful" insight link was pretty laughable, as the methodology used at that site demonstrates the failings of amateur testing enumerated in the article almost point by point.

    2). The statement "We encourage users to install a trial version of any AV product they are interested in before making a purchase, so that they can decide for themselves whether it fits their own personal requirements" really should have been left out. On what basis can the typical Home User determine that the security solution so trialed is any good? This is like telling someone just to buy an automobile and drive it, without first reading professional testing sites and learning that the engine seizes and the tires fall off within the first month.

    All the Home user can determine is that the product is "light" (Oh God, KMN) and that the GUI is pretty.

    3). "Some YouTube testers might even write their own malware (which could be considered as unethical and in some countries also illegal); beside the fact that the self-written/artificial test malware would (hopefully) never be seen in the wild (and therefore not represent the real-world)"

    Actually, as long as the malware so written is not distributed, it is neither unethical nor illegal. Personally I think it is actually smart, as it gives the closest thing to a true zero-day sample that one can get.

    guest (as usual) makes the following valid points:

    4a). Scriptor protection in the bulk of Home security products is very poor, and I would also like to see more of it in professional tests, instead of the concentration on typical riff-raff like ransomware and downloaders.

    4b). We also really need to see the actual age of the samples used at the time testing is done, and whether all products are tested against these samples simultaneously. Acquiring a true zero-day sample, then letting it sit for a day or two before testing, will really skew the results (somebody or other on YouTube made this point recently).
     
    Last edited: Aug 31, 2018