VBA32 and KAV, same bases?

Discussion in 'other anti-virus software' started by Mack Jones, Aug 29, 2005.

Thread Status:
Not open for further replies.
  1. SDS909

    SDS909 Registered Member

    Joined:
    Apr 8, 2005
    Posts:
    333
    What purpose would this serve? I don't publish test results on a web page, and I don't offer anything more than some basic opinions based on my findings in our day-to-day operations (and my opinions are purposely vague). Therefore my credentials are totally irrelevant to this discussion, because I'm not soliciting affirmation of any test or result.

    You, on the other hand, publish results on a web page, frequent security forums, and seem to regard yourself as the consummate antivirus expert. As such, I'd expect to see a detailed list of credentials, certificates, training, and a background working in the industry. Otherwise, you should simply label your tests for what they are - hobby testing - and include a disclaimer pointing this out along with the test.

    This isn't an attack on you personally; I'm merely pointing out your own admitted discrepancies and biases, and the fact that you are a hobby tester, and that your test results should be considered with these things in mind. Nothing more, nothing less.
     
  2. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Which bias?? It seems like you do not WANT to understand it, so I will stop discussing this with you, as you always start the same way. The AV companies accredit me (and being accepted by AV companies is not something everyone can get), so I do not understand why you keep attacking us. Are you perhaps unhappy that your "kobra tests" are not recognized by the AV community?
     
  3. Stan999

    Stan999 Registered Member

    Joined:
    Sep 27, 2002
    Posts:
    566
    Location:
    Fort Worth, TX USA
    Just the typical and, unfortunately, expected sour-grapes reaction when someone's current favorite AV didn't do well on a test.

    I, for one, appreciate the considerable amount of effort, time, and expense Andreas Clementi put into producing the results and additional details for this on-demand comparative using a large number of signatures.
     
  4. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    It was news to me that those Jotti's statistics were from you, sorry! But maybe you still misunderstood what I meant. In my attached picture from a certain Jotti's scan, it says:

    > (Note: this file has been scanned before. Therefore, this file's scan results will not be stored in the database)

    I thought that the detection rates at Jotti's were based on the assumption just mentioned above, nothing else. If that is still true, I think the stats look the way they do because the top 3 or 4 AVs at Jotti's also have the highest update frequencies per year, so they add the newest nasties first - the ones scanned by HOME users, not by corporate specialists, whereas VirusBulletin adds its samples first.

    Best regards,
    Firefighter!
     

    Attached Files:

  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    If a file is uploaded one time as a ZIP or RAR archive etc. and a second time (e.g. the next day) unarchived, it will be counted as a new file and the results will be displayed (stored in the database).
    But yeah, your point may be true: those that update more frequently will probably score better there. It may also be true that scanners which detect more spyware/adware etc. will score better at Jotti's (as most samples uploaded there are from those categories).
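
    To make this concrete, here is a rough Python sketch - it assumes the "scanned before" check is keyed on a hash of the uploaded bytes, which is just my guess at the mechanism, not something documented by Jotti:

        import hashlib
        import io
        import zipfile

        def sha256(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        # Harmless stand-in bytes, not real malware.
        payload = b"placeholder sample bytes"

        # Upload 1: the raw file.
        raw_digest = sha256(payload)

        # Upload 2: the same file, this time wrapped in a ZIP archive.
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("sample.exe", payload)
        zip_digest = sha256(buf.getvalue())

        seen_before = {raw_digest}               # hashes of previously scanned uploads
        print(raw_digest == zip_digest)          # False: the archive hashes differently
        print(zip_digest in seen_before)         # False: counted as a brand-new file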
     
  6. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    That's true, at least when comparing VirusTotal against Jotti's. I just checked some adware samples with DrWeb in VT and in Jotti's. They were detected in Jotti's but not in VirusTotal. Maybe VirusTotal doesn't use those risky/nasty beta defs of DrWeb but Jotti's does! :oops:

    Best regards,
    Firefighter!
     
  7. SDS909

    SDS909 Registered Member

    Joined:
    Apr 8, 2005
    Posts:
    333
    This is why I trust VBA32, heavily... I ran across a particularly nasty piece of adware a few minutes ago; I'd go so far as to categorize it as a trojan downloader. Of course, VBA fired off a warning - good ole' trusty VBA32.

    Time and time again, across hundreds of weekly samples, VBA32 is the only one detecting this stuff. This isn't luck, and these aren't hand-chosen samples; these are real threats on a honeypot machine. 85% detection or less for VBA32? As I've said before, it has never scored less than 95% on anything I've thrown at it, including zero-hour outbreak files.

    http://www.boredmofo.com/downloads/newthreat9812.JPG
    http://www.boredmofo.com/downloads/newthreat9813.JPG
     
  8. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Just a comment from a distant observer - everyone in the thread is focusing too much on numbers without asking whether the seemingly contradictory results and more casual usage impressions can both be correct. Of course they can!

    If you're launching a new AV today, or reaching for wider geographical exposure, covering the threats that are current and active today is paramount. Your attention will be focused on what people are actually being exposed to; that is what will make or break you in the word-of-mouth market. Would it be nice to cover all known malware? Sure, but internal resources are limited, and you likely have excellent access to currently circulating threats through the many multivendor malware submission sites on the net. In addition, new users encountering older malware guarantee that, as time passes, you will also hook into the legacy malware that is still actively circulating. Net result - excellent field-use detection characteristics.

    Say you have an exceptionally comprehensive collection of both zoo and ITW malware. You challenge the AV described above with that collection. Depending on the circulation statistics of the zoo samples, that AV could perform anywhere from admirably to dismally, simply because it has potentially been challenged with samples it has never encountered. Net result - detection characteristics could be anywhere.
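
    To put rough numbers on that - all of the coverage figures and the test-set mix below are invented for illustration, not measurements - a quick Python sketch:

        # Hypothetical detection rates for a young AV focused on current threats.
        itw_coverage = 0.98   # assumed rate on currently circulating (ITW) samples
        zoo_coverage = 0.60   # assumed rate on older/obscure zoo samples

        def overall_rate(itw_fraction: float) -> float:
            """Blended detection rate for a test set with the given ITW share."""
            return itw_fraction * itw_coverage + (1 - itw_fraction) * zoo_coverage

        # Field-like exposure (mostly ITW) vs. a comprehensive zoo-heavy collection.
        print(f"field-like set (90% ITW): {overall_rate(0.90):.1%}")   # 94.2%
        print(f"zoo-heavy set  (20% ITW): {overall_rate(0.20):.1%}")   # 67.6%

    Same scanner, very different headline numbers, purely as a function of how much of the test set overlaps with what the vendor has actually seen.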

    So what does this all mean? Basically, understand what these tests can tell you and what they can't. The tests can confirm excellent performance (the 95+% products are good); on the other hand, they do not demonstrate poor or marginal performance in the field (the <85% products are not necessarily bad). For those wishing for a black-and-white delineation, this may be somewhat disconcerting, but it is forced by the comparative nature of the test vs. the real world.

    Personally, I see no necessary conflict between a <85% test result and anecdotal field observations that suggest exceptionally good performance. It's an almost forced situation for products early in their lifecycle.

    Blue
     
  9. Tweakie

    Tweakie Registered Member

    Joined:
    Feb 28, 2004
    Posts:
    90
    Location:
    E.U.
    Of course, you realize that this scheme also applies to AV-Comparatives. Since you do provide undetected samples to the companies after the tests, they could decide to prioritize adding them to their signatures before the next test.

    However, there is an easy way of detecting that:
    - From the graphs that you published on page 3 of the test report, you can easily compute the improvement rate for each AV.
    - Then, you can compare the improvement rates (between this test and the previous on-demand test) for the malware that was added to the test set between the two tests (don't forget to remove the samples that were provided by the vendors and those that were used for the proactive tests, since vendors know that you will use them).

    If, for a particular AV, these improvement rates are too different (i.e. the improvement over the (large) already-used test set is higher than over the (smaller) new test set), you can deduce that the AV vendor is cheating a little bit.
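
    A rough Python sketch of one way to read that check - the definition of "improvement", the threshold, and every number below are my own assumptions for illustration, not anything from the report:

        def retained_improvement(missed_before: int, missed_now: int) -> float:
            """Share of previously missed retained samples that are now detected."""
            if missed_before == 0:
                return 0.0
            return (missed_before - missed_now) / missed_before

        def looks_suspicious(retained_gain: float, new_set_rate: float,
                             margin: float = 0.25) -> bool:
            """Flag an AV whose gain on reused samples far outpaces its rate on new ones."""
            return retained_gain - new_set_rate > margin

        # Hypothetical AV: of 1,000 retained samples it missed last time, it now
        # detects 950, yet it only catches 55% of the genuinely new samples
        # (vendor-supplied and proactive-test samples already excluded).
        gain = retained_improvement(missed_before=1000, missed_now=50)    # 0.95
        print(f"gain on reused samples: {gain:.0%}")                      # 95%
        print(f"suspicious: {looks_suspicious(gain, new_set_rate=0.55)}") # True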
     