AV-Comparatives Anti Trojan Test ...

Discussion in 'other anti-trojan software' started by Infinity, Mar 24, 2006.

Thread Status:
Not open for further replies.
  1. Blackcat

    Blackcat Registered Member

    Joined:
    Nov 22, 2002
    Posts:
    4,024
    Location:
    Christchurch, UK
    :thumb:

    Fortunately, this "test bashing" response is limited to only a few security vendors.

    Maybe someone from the TH camp can explain to the amateurs here, why the on-demand scanner of Ewido apparently detects a lot more samples than TH in this test.

    Further, I would have thought that the TH forums either at CastleCops or over at the official site would have discussed the results by now!
     
  2. Magnus Mischel

    Magnus Mischel Security Expert

    Joined:
    Oct 24, 2002
    Posts:
    185
    When you politely ask for the samples TrojanHunter missed in the test and get the response that providing them is "forbidden" you kind of raise an eyebrow. When you reply and ask who is forbidding this and learn that it's the antivirus companies who have provided the test samples then that tells me that the test is not unbiased.

    Anyone who has taken even the most basic introductory course in statistics knows that to get an unbiased test result you must collect samples at random. Having others provide the samples, and antivirus vendors in particular, will give a result that is about as far from unbiased as you can get. And it will obviously favor any scanner that uses the methodology of "scan files with a couple of virus scanners and add signatures for anything that the virus scanners label as Trojan.*".
     
  3. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,891
    Location:
    Innsbruck (Austria)
    It's forbidden because I do not trust companies with "low" detection rates with any samples. You have to gather your samples elsewhere.
    I do not know who your teacher of statistics is, but to get unbiased results you have to try to gather all existing malware, to know what the overall rate is. The best sources for getting all malware are (but not only) AV companies.
    Now tell me, what is biased in the AT results if no one from the AT vendors got any sample from me? Are you trying to say that TH would score better in another test than the other products? Or are you just not able to interpret the test results?
     
  4. Magnus Mischel

    Magnus Mischel Security Expert

    Joined:
    Oct 24, 2002
    Posts:
    185
    No, sorry, to get an unbiased test result you must collect the samples at random. Even the 4-credit statistics course I took for my MSc explained this with great emphasis. If you want to do a survey of the mean income of households, you could walk around the poor neighbourhood of town for several months asking people, but the results would still be biased (because you only asked the households with below-average incomes).
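    The sampling-bias point can be illustrated with a short simulation (a toy sketch; the population figures and group proportions are invented for illustration, not real income data):

```python
import random

random.seed(42)

# Hypothetical town of 10,000 households in two income groups.
# All numbers here are made up purely to illustrate sampling bias.
population = ([random.gauss(25_000, 5_000) for _ in range(4_000)]    # poorer neighbourhood
              + [random.gauss(60_000, 15_000) for _ in range(6_000)])  # rest of town

true_mean = sum(population) / len(population)

# Biased sample: survey only the poorer neighbourhood.
biased = population[:4_000]
biased_mean = sum(biased) / len(biased)

# Random sample of the same size drawn from the whole population.
rand = random.sample(population, 4_000)
random_mean = sum(rand) / len(rand)

print(f"true mean:   {true_mean:,.0f}")
print(f"biased mean: {biased_mean:,.0f}")   # far below the true mean
print(f"random mean: {random_mean:,.0f}")   # close to the true mean
```

    The biased sample underestimates the true mean by a wide margin no matter how many people are surveyed, while the random sample of the same size lands close to it.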
     
  5. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,891
    Location:
    Innsbruck (Austria)
    Sure, because it would not be complete.
    Well, I think I have more years of statistics behind me than you. If you want to know e.g. the mean income of people in Australia, you have to ask all the people of Australia, not just pick some out randomly. If you cannot reach everyone, then you have to pick them randomly, because you have no other choice.
    But I understand that you want to try to discredit a test just because your product scored badly in it. Contrary to you, I have no reason to bash TH or anyone else.
     
  6. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    There are many types of bias in the strict technical sense of the word. Bias per se is not necessarily bad, it can represent the application of a priori knowledge. In fact, in this test bias is necessary since correct classification of subjects is critical. This is a subset challenge test. To take the income analogy, you're looking for the income of a defined demographic (say unmarried men in the age range of 25-40...)

    Simply stated, this is not necessarily true. This sample could as easily reflect a random subsample of the extant trojan population available. From a trojan harvesting perspective, it could be a perfectly random population.
    As is the case with tests presented by casual testers here and elsewhere, this is a germane comment. As with other test discussions carried out here, whether a product flags a given file depends on a number of factors including technical decisions made by the tester and the vendor for a variety of reasons.

    As always, it is easy to overinterpret test results, particularly one of this nature in which a database derived from a variety of general antimalware packages is used to assess specialized tools. Since the subject here is a tool targeting a subset of malware, the details of classification and categorization criteria matter, this is where the potential bias creeps in. It may seem like a fine semantic point, unfortunately it's not. A disconnect on primary definitions and all bets are off. While base sampling may be an issue, focusing clearly on what is in and what is out of scope is equally critical.

    This is not as much of an issue with general antimalware packages since all malware is within scope. Naturally, what one considers or does not consider malware in a general sense matters, but local differences in categorization don't matter (I assume a hit is a hit whether the category was correctly tallied or not - correct?)
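    In other words, the overall detection rate is insensitive to how each hit was categorised, as a toy tally shows (the sample names and labels below are made up for illustration):

```python
# Toy scan results: (sample, label assigned by the scanner, or None if missed).
results = [
    ("a.exe", "Trojan.Agent"),
    ("b.exe", "Backdoor.Rbot"),   # arguably a trojan, tallied as a backdoor
    ("c.exe", None),              # missed entirely
    ("d.exe", "Worm.Mytob"),      # miscategorised, but still flagged
]

# A hit is a hit: the detection rate ignores which category the scanner chose.
hits = sum(1 for _, label in results if label is not None)
rate = hits / len(results)
print(f"detection rate: {rate:.0%}")  # 75%
```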

    Without having technical details trotted out before us, which would invariably go over most of our heads (well, at least my head) and would be inappropriate here, it is rather difficult to decide what to make of this.

    As someone who uses a competitive product that was not examined for lack of an automated file scanner, about the only thing one can do is appeal to personal experience. I know the product I use works, I have seen it in action, and that action has been backing up what I consider the two top AVs on the market. I would trust there are similar experiences from TrojanHunter users. I wish my recommendation were more objective, but I fear it is not at this point.

    Blue

    PS - For the record and just so we don't have to trot out our university transcripts, I'm fairly certain that I have more years of statistical sampling and exploratory data analysis than both of you, combined that is. Unless, of course, both of you are a lot older than I think... :)
     
  7. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,891
    Location:
    Innsbruck (Austria)
    :thumb: agree with you Blue.
    Yeah, I know your other posts, and you have shown that you know a lot about statistics (probably even more than me). Fortunately my professor still keeps an eye on what I do, so in case I am unsure about something, I know who to ask ;)
     
  8. Happy Bytes

    Happy Bytes Guest

    Basically you do not have any other option :D
     
  9. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,891
    Location:
    Innsbruck (Austria)
  10. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Actually, if you contact my teenage sons, they'll offer another option. They happen to use it a lot, much to my chagrin :)

    Blue
     
  11. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
    What kind of nasties were the 'best' anti-trojans missing?
    Not asking for specifics but are they new, old, relatively insignificant or the most dangerous nasties?

    Must admit that 70-odd percent doesn't strike me as a terribly good score.

    I take it the antivirus software could catch some of the test files?
    It's surely possible to test an AV alongside an AT to see which combination covered the most ground, although I'd imagine it would be the mother of all tests. ;)
     
  12. Gavin - DiamondCS

    Gavin - DiamondCS Former DCS Moderator

    Joined:
    Feb 10, 2002
    Posts:
    2,080
    Location:
    Perth, Western Australia
    IBK, I'm sorry if my posts appear to be "bashing"; they are simply to defend a product which is made to look bad by the limited information revealed. Your post to me is more bashing than mine. Surely you don't say your test is IMperfect for AVs, but now PERFECT for ATs? I do NOT discredit this test entirely, but you are taking an offensive position, not me.

    Do you deny that many of your samples (all?) are NOT compressed, even though in the wild they were in a compressed state? I have seen collections, and they seem to always be pre-processed to produce all unpacked files. To an AV engine it makes no difference, but to an AT it can, and to TH it does. We are not out to build an AV engine! For the record it is not MY product anyway, but I still see it as wrong to have people thinking these tests are an accurate reflection of detection capability. Only those who know how the scanner works and what is in the test sets know the full truth; WE don't know, because you won't tell us or send a random group of samples.
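    The packed-versus-unpacked point is easy to demonstrate: a naive byte signature taken from an unpacked body will not match the same body once it has been packed. A minimal sketch, using zlib compression as a stand-in for a real runtime packer (real packers like UPX transform the file differently, but the effect on plain byte signatures is the same):

```python
import zlib

# Stand-in "malware" body and a byte signature taken from it.
# Both strings are invented for illustration.
body = b"...CONNECT-BACK-SHELL-v1.0..." * 10
signature = b"CONNECT-BACK-SHELL"

# "Pack" the file; the compressed stream no longer contains the literal bytes.
packed = zlib.compress(body)

print(signature in body)    # True  - signature matches the unpacked file
print(signature in packed)  # False - same signature misses the packed file
```

    A scanner that only matches raw bytes on disk would flag the unpacked sample and miss the packed one, which is why a test set of pre-unpacked files favours engines built around on-demand file signatures.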

    How many of your samples have you verified by hand not to be installers, corrupted files, etc.? Don't just come back and say 100% are ITW samples; you did NOT verify 150k samples one by one, I do not believe it :D

    How many of your samples would be picked up in the wild if we randomly selected 100,000 internet users and checked their machines? A large % would never be seen. As an AV expert you cannot deny this truth.

    That is all I need to know. I do not intend to bash the test, only to understand and explain why TrojanHunter could miss many samples.

    TrojanHunter aims at a specific detection niche, and at detecting trojans in memory etc., instead of wasting detection on a LOT of never-seen samples that every other scanner detects anyway. If every scanner is just going to detect all the same stuff, we might as well cling to the hope that a scanner can detect 100% of everything. That hope is long gone these days ;)

    For the record, your testing of ANTIVIRUS is good. AT is just supposed to be complementary to AV protection, not a direct competitor. Having said that, TrojanHunter will improve in these tests at a fast rate because we cannot ignore human nature - most of the public will think the tests are somewhat accurate, and some will assume they are 100% accurate!
     
  13. redwolfe_98

    redwolfe_98 Registered Member

    Joined:
    Feb 14, 2002
    Posts:
    584
    Location:
    South Carolina, USA
    I don't see where Ewido, TrojanHunter, or BOClean were even tested, and I would not compare those programs with antivirus programs anyway.

    You have to wonder, too, why it is that people who do tests have samples that the various vendors don't have.

    I am not knocking AV-Comparatives.
     
  14. Acadia

    Acadia Registered Member

    Joined:
    Sep 8, 2002
    Posts:
    4,366
    Location:
    US
    Redwolfe, read the very latest test done just this past week, the AT, not the AV.

    Acadia
     
  15. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,891
    Location:
    Innsbruck (Austria)
    @Gavin:
    I think the test report is quite clear: the same test-sets (backdoors, trojans and other malware) used in February in the AV test were used in this AT test. What does this test show? It shows how many of the samples in these test-sets were detected on-demand by the various AT products. It is even noted in the report why the results may look low and why it makes sense to use the ATs in real-time instead of only for on-demand scans. You even admit that TH detects trojans mainly in memory and not in an on-demand scan. Well, other AT products do also detect some more samples on-demand, and that is what the test shows. As noted many times in the report, ATs are good to use in addition to AV software, no matter if their on-demand scores are lower than AV ones. And by the way, TH did not score as badly as you give the impression it did; some other AT products had lower scores.
     
  16. Gavin - DiamondCS

    Gavin - DiamondCS Former DCS Moderator

    Joined:
    Feb 10, 2002
    Posts:
    2,080
    Location:
    Perth, Western Australia
    I am aware it didn't score as expected :D Thanks for the info, and for your personal message. I am very interested to see how it all goes in future testing, and I do respect the work you guys have done over there a lot!
     
  17. Infinity

    Infinity Registered Member

    Joined:
    May 31, 2004
    Posts:
    2,651

    :thumb: :thumb: !!!!

    /edit: I'm practically 100% sure that your (TH's) memory scanner is one of the better ones ...
     
    Last edited: Mar 27, 2006