November Retrospective Test Results

Discussion in 'other anti-virus software' started by Brandonn2010, Nov 19, 2011.

Thread Status:
Not open for further replies.
  1. Technical

    Technical Registered Member

    Joined:
    Oct 12, 2003
    Posts:
    471
    Location:
    Brazil
    Like any other antivirus vendor, avast could complain about parts of its product being disabled...
    avast's WebShield and NetShield were disabled to test only the on-demand detection capacity...
     
  2. Dark Lord

    Dark Lord Registered Member

    Joined:
    Jun 30, 2011
    Posts:
    120
    @Pbust
    The quick solution would be to let PAV PRO participate in the Retrospective Test and PCAV in the On-Demand & Whole Product Dynamic Tests.

    Regards,
    Dark Lord
     
  3. bonedriven

    bonedriven Registered Member

    Joined:
    Jan 14, 2007
    Posts:
    566
    The company that bought Malware Defender. Also, their security software/tools may be the most widely used in China at present.
     
  4. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    I don't believe you can do that. The On-Demand and Retrospective tests are parts of the same test and go hand in hand. You can also see this in how the false positives from the On-Demand test are re-used in the Retrospective test, even though they might have been fixed a long time ago. This is (yet) another problem with the Retrospective test: it uses obsolete FP testing results.
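    To illustrate the concern, here is a minimal sketch (the file names and dates are invented, not taken from the report) of why an FP list carried over from August may no longer describe the product by November:

    ```python
    from datetime import date

    # Hypothetical FP records: (file, date_reported, date_fixed_by_vendor)
    fp_records = [
        ("setup_tool.exe", date(2011, 8, 10), date(2011, 8, 20)),
        ("driver_pack.exe", date(2011, 8, 12), None),  # still unfixed
    ]

    def still_relevant(record, report_date):
        """An August FP still describes the product at report time only
        if the vendor had not fixed it before that date."""
        _, _, fixed = record
        return fixed is None or fixed > report_date

    november = date(2011, 11, 19)
    relevant = [r for r in fp_records if still_relevant(r, november)]
    print(f"{len(relevant)} of {len(fp_records)} August FPs still present in November")
    ```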
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    It is not obsolete if you look at the test date.
     
  6. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    Looking at the test date in the report, it says "The false alarm test results were already included in the test report of August. For details, please read the report available at http://.....". What I mean by obsolete is that an FP test from August is included in the Retrospective test of November instead of re-doing the FP test in November. Those FPs reported in August are now irrelevant in November.

    Additionally, the footnote of the November Retrospective test says "All discovered false alarms were already reported to the vendors in August and are now already fixed." If that's so and they were fixed in August, what's the point of including them in the November test?
     
  7. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    I find it hard to believe that you are not understanding it. See the header in the report (= August). Of course the FP test has to be done in August if you test the heuristics/products of August.
     
  8. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    Yes, I understand it, but I still think it's irrelevant. You are assuming that all FPs are due to heuristics and not signatures. This is not true.
     
  9. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    I think typically it's going to be from heuristics.
     
  10. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    The report has all the detection names printed, and as you can see, the majority are not heuristic/suspicious.
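    As a rough illustration of reading names this way, here is a sketch over a few invented detection names. Prefixes like these are commonly used for heuristic/generic verdicts, though naming conventions vary by vendor and (as disputed later in this thread) are not a reliable indicator:

    ```python
    # Crude prefix-based split; an approximation only, since vendors
    # do not follow a common naming convention.
    HEURISTIC_MARKERS = ("HEUR", "Heur", "Gen:", "Generic", "Suspicious")

    def looks_heuristic(detection_name: str) -> bool:
        return detection_name.startswith(HEURISTIC_MARKERS)

    # Hypothetical names of the kind printed in such a report:
    names = ["Trojan.Agent.XYZ", "HEUR/Malware.Gen",
             "Win32/FakeAV.A", "Gen:Variant.Zbot"]
    heuristic = [n for n in names if looks_heuristic(n)]
    print(f"{len(heuristic)}/{len(names)} look heuristic by prefix")
    ```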
     
  11. Technical

    Technical Registered Member

    Joined:
    Oct 12, 2003
    Posts:
    471
    Location:
    Brazil
    Thanks, IBK, for jumping in.
     
  12. andyman35

    andyman35 Registered Member

    Joined:
    Nov 2, 2007
    Posts:
    2,336
    The main problem I have with this (and similar) type of test is that it perpetuates the myth that detection directly equates to protection. Unless that detection is 100%, it becomes increasingly irrelevant.
     
  13. Dark Lord

    Dark Lord Registered Member

    Joined:
    Jun 30, 2011
    Posts:
    120
    @IBK - Yes, I think pbust's point is correct.
    If you want to include an FP test in the November Retrospective test, then the FP test should have been re-done in November itself, rather than just carrying forward the August FP test results, which have already been fixed by November.

    And IBK, why didn't you provide the Retrospective test's missed samples to the particular vendors to verify? o_O
    It's also very disappointing to hear that in an earlier test, where vendors had been provided with the missed samples, they reviewed some of the missed files as NOT MALICIOUS!!! This tells us that AVC's virus testbed doesn't contain only pure malware but non-malicious files too!!
    This leads us to wonder whether such non-malicious files were also included in the November Retrospective test, since the missed samples have not been sent to vendors.

    Regards,
    Dark Lord
     
  14. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Interesting. Thanks.
     
  15. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Joined:
    Sep 30, 2003
    Posts:
    702
    I don't think customers care whether their FPs were caused by signatures, heuristics, generic detections, or other detections from the cloud.

    On the other hand, the retrospective test has really lost its meaning, as many AV products rely on their cloud to work properly. For example, Avira presumably has some automatic FP prevention system based on cloud data; that will not work in this test.
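    A minimal sketch of the kind of cloud-assisted FP suppression meant here (the whitelist service and its contents are hypothetical; the point is only that the check cannot run when the product is offline, as in a retrospective test):

    ```python
    import hashlib

    # Hypothetical cloud-served whitelist of known-clean file hashes.
    CLOUD_WHITELIST = {"<sha256 of a known-clean file>"}

    def verdict(file_bytes: bytes, heuristic_hit: bool, online: bool) -> str:
        digest = hashlib.sha256(file_bytes).hexdigest()
        # The cloud lookup can override a heuristic FP, but only online.
        if heuristic_hit and online and digest in CLOUD_WHITELIST:
            return "clean (heuristic overridden by cloud whitelist)"
        return "detected" if heuristic_hit else "clean"

    # Offline, the whitelist check is skipped, so the FP stands:
    print(verdict(b"example", heuristic_hit=True, online=False))
    ```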

    So the really interesting tests are the "real world" tests, but those also need to be improved to really match the way a real-world user actually gets infected.
     
  16. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    100% agreed. But this was not my point. My point is that it makes no sense to reuse the FP report from one test in another test three months later, simply on the argument that those FPs are due to heuristics and are therefore still present in the product. This is simply not true, not only because many FPs nowadays are due to sigs but also because, as you said, whitelisting from the cloud (we have this too) is not allowed to do its thing either.

    Also 100% agreed. Most if not all real-world/whole-product tests don't replicate the true infection vector (drive-by exploit, spam message, etc.). They simply point to a URL/malware.exe and run it from the URL. This does not allow the anti-exploit or anti-spam measures included in products to do their thing.
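    To make the distinction concrete, a test harness could at least record the intended infection vector for each case so coverage can be reported per vector. A minimal sketch (field names and URLs are invented for illustration, not any lab's actual schema):

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Vector(Enum):
        DRIVE_BY_EXPLOIT = "drive-by exploit"
        DIRECT_URL_EXE = "direct URL to .exe"
        SPAM_ATTACHMENT = "spam attachment"
        SOCIAL_POST = "social network post"

    @dataclass
    class TestCase:
        sample_url: str
        vector: Vector

    cases = [
        TestCase("http://example.invalid/a.exe", Vector.DIRECT_URL_EXE),
        TestCase("http://example.invalid/exploit", Vector.DRIVE_BY_EXPLOIT),
    ]

    # Report how many cases exercise each vector:
    for v in Vector:
        n = sum(1 for c in cases if c.vector is v)
        print(f"{v.value}: {n} case(s)")
    ```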
     
  17. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Joined:
    Sep 30, 2003
    Posts:
    702
    Of course the exploits are very important and urgently need to be included in the testing. But I think, with the many rogues/fake AVs around, there is also an increasing number of cases where the user really just downloads the exe and runs it, with no exploits involved. I don't have any numbers, but I would estimate this at 10-20%+ of the infections coming from the internet.
     
  18. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    I never stated this. Additionally, you cannot distinguish whether an FP is from a signature or a heuristic just by looking at the name; that's foolish. I do suggest readers have a personal look at the report with the detection names. The FP test was WITH cloud, so you had your whitelisting. By the way, the test is not "3 months later"; only the release of the report is later. The test is as if it had been done in August, at the same time. Just read the report to understand.

    I think you know that over half of the test cases are exploits/drive-by downloads. Practically all products are good at blocking them, which is why over 90% of the misses you usually get are links pointing directly to the malware. I guess this is what you experience with other labs too.
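    A back-of-the-envelope check of that claim (the percentages below are invented, not from the report): if exploit/drive-by cases make up just over half the set and are blocked far more reliably than direct links, the misses end up dominated by the links.

    ```python
    # Hypothetical test composition and per-vector block rates.
    driveby_share, driveby_block = 0.55, 0.99  # 55% of cases, 99% blocked
    link_share, link_block = 0.45, 0.90        # 45% of cases, 90% blocked

    driveby_misses = driveby_share * (1 - driveby_block)  # 0.0055
    link_misses = link_share * (1 - link_block)           # 0.0450
    total = driveby_misses + link_misses

    # ~89% with these made-up numbers, consistent with "over 90% of misses".
    print(f"Share of misses that are direct links: {link_misses / total:.0%}")
    ```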
     
  19. kareldjag

    kareldjag Registered Member

    Joined:
    Nov 13, 2004
    Posts:
    622
    Location:
    PARIS AND ITS SUBURBS
    hi,

    As I am far from having Alzheimer's, I will not repeat here what I've said many times (AV tests are an equation without a solution, AV detection is an NP-complete problem, and so on :) ).
    But one criterion is required for any comparative test (from cars to AVs): products need to be on the same starting line.
    This means comparing the same features.
    I don't agree with the few lines in the PDF where some features are just considered a "plus".
    And it's not fair to compare products without testing them with ALL their FEATURES AND POTENTIAL (I am not an AV fan, but I respect their work and their efforts to improve their products).
    The reaction of the Panda team is legitimate if we consider the marketing impact of this test: average users are only interested in the overall score and the podium, not in the methodology and the nota bene...
    As there is no signature extraction scheme, how can the testers be absolutely sure that the detection is an ID for the malware rather than an ID for the packer/crypter?
    Are the testers sure that the entry point is not obfuscated?
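    One rough way a tester could sanity-check this is a packing heuristic on the entry-point section (a sketch using the third-party pefile library; the entropy threshold is a crude rule of thumb, not a reliable packer test):

    ```python
    import pefile  # third-party: pip install pefile

    def looks_packed(path: str) -> bool:
        pe = pefile.PE(path)
        ep = pe.OPTIONAL_HEADER.AddressOfEntryPoint
        for section in pe.sections:
            if section.contains_rva(ep):
                # High entropy in the entry-point section is a common
                # (imperfect) indicator of packing/encryption.
                return section.get_entropy() > 7.0
        return True  # entry point outside any section is itself suspicious

    # print(looks_packed("sample.exe"))  # hypothetical sample path
    ```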

    Moreover, it is known that most new malware is made in China for a big part and in eastern Europe for the rest.
    oohoohh... this explains the placing of Qihoo... from the country of the panda...
    If more malware had been taken from the Hispanic underground community (foro.elhacker and co), Panda would no doubt score higher.

    That said, criticism is easy, but art is difficult :) ...
    I still believe that IBK and AV-Comparatives always try to provide tests that are as honest and serious as they can.

    Rgds
     
  20. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    In our case I can definitely tell which ones are heuristic and which ones are sigs, and for other engines I have a pretty good idea. But I do agree that regular users won't be able to distinguish.

    As for the whitelisting from the cloud, you are right, it was included in the detection test. My point is that the FP testing should be re-done instead of re-used. Unfortunately, if the FP test were re-done in the Retrospective, it wouldn't be able to connect to the cloud whitelisting, but that's less of an issue than the problem of re-using the FP results.

    But the re-use of FPs is just one of the issues with the Retrospective test. My points above about providing missed samples and cloud-based heuristics still stand. I can understand not using cloud-based heuristics, obviously, as it's an offline test, and I can accept a drop in the pretty graph because of this (even though our marketing people will not agree with me). But I still don't understand why we cannot evaluate the missed samples of the Retrospective when in every other test out there it is standard practice to provide missed samples.

    As for the real-world stuff, there are more infection vectors than just drive-bys and direct-to-exe URLs (like spam, FB posts, etc.), but that's another discussion. Like I said, some real-world tests do this better and some do it worse. Not all are the same.
     