Anti-Trojan tests

Discussion in 'other anti-trojan software' started by toploader, Aug 23, 2005.

Thread Status:
Not open for further replies.
  1. toploader

    toploader Registered Member

    Joined:
    Aug 19, 2005
    Posts:
    707
  2. Starrob

    Starrob Registered Member

    Joined:
    Apr 14, 2004
    Posts:
    493
    One of the programs in that test has been discontinued: TDS-3 is gone. The test is also a year old and does not even include TrojanHunter, BOClean, A2, or Ewido.

    I have yet to see a trojan test that truly compares performance both on demand and in memory.

    The closest I have seen is nautilus, but he/she only does on-demand testing (as far as I have seen). A test where a large majority could agree the testing was done fairly might require too much time and effort for most testing individuals or organizations.

    It would require a testing regimen that eliminates the bias of the person doing the testing. It would also require an exact, agreed-upon definition of what a trojan is and what it isn't. I would get tired if I typed out everything needed to make a test "fair" to all parties, so I will end with just those two requirements.

    I think it would be really tough to set up a testing environment that is truly fair. My hat is off to those who try.

    Virtually every AV and AT test I have seen gets disputed, both the results and what they mean. It takes so much time and effort to account for everything that could cause bias and error that I don't consider any test I have seen so far "accurate". And even if a test could be designed that is "fair" to all vendors involved, it might still be far removed from any end user's real-life results.

    All of the tests I have seen so far are only a very rough guide to whatever is being tested. Some guides are rougher than others, and most tests are virtually useless because all tests look back in time. What difference does it make what an application could detect a year or so ago? What relevance does that have for what it can detect today? Most applications have moved to another version or engine within a year.

    Even a test that is a month old can only tell you what an application was capable of detecting, using those particular testing methods, a month ago. A month in the security industry can be a lifetime; some applications make vast improvements in a month.

    These are my reasons why I only consider tests an extremely rough guide. I take them with a grain of salt.



    Starrob
     
    Last edited: Aug 23, 2005
  3. toploader

    toploader Registered Member

    Joined:
    Aug 19, 2005
    Posts:
    707
    I agree, Starrob; one should always use plenty of salt when it comes to tests.

    I think the more independent testers the better. If you have 4 or 5 independent testers, you can get a flavour of who seems to be the most consistent in the pack.

    No test is infallible; like you say, it's a moving ballgame.

    At the end of the day I read all the tests I can to try to build up a picture of what's out there and how good it is. It's the same reason I browse these forums: to learn about the experiences others have had with their software, the pros and cons, things to watch out for, and so on.

    Hopefully over time some products will stand out from the rest.

    Yes, the tests are a bit old; I don't know if he has done any new ones.
     
  4. Guest

    Guest

    @Starrob

    "The closest I have seen to doing it is nautilus but he/she only does on-demand testing (that I have seen so far)."

    We regularly execute malware samples and perform on-access tests. In connection with the BOClean report we executed about 500 malware samples. Several times. On several different computers. It was truly painful: click, click... wait... allow BOC to kill and remove a malware sample... wait... click, click... start the next sample, and so on. That's why we usually perform spot checks rather than executing the entire test archive.
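    To give an idea of what automating that loop would look like: below is a minimal C sketch of an execute-and-wait cycle. It is only an illustration of the procedure described above, not our actual harness; the samples.txt list, the 10-second timeout, and the log messages are made-up placeholders, and it assumes a disposable test machine or VM snapshot that gets rolled back afterwards.

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative on-access test loop: start each sample, give the
       resident scanner time to react, then kill whatever survives.
       Run only on a disposable machine/VM that is rolled back after. */
    int main(void)
    {
        char path[MAX_PATH];
        FILE *list = fopen("samples.txt", "r"); /* hypothetical sample list */
        if (!list) return 1;

        while (fgets(path, sizeof(path), list)) {
            path[strcspn(path, "\r\n")] = '\0';

            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi;
            if (!CreateProcessA(path, NULL, NULL, NULL, FALSE,
                                0, NULL, NULL, &si, &pi)) {
                /* Launch failed: the on-access guard may already have
                   blocked or removed the file. */
                printf("blocked before start: %s\n", path);
                continue;
            }

            /* Wait up to 10 seconds for the scanner to terminate it. */
            if (WaitForSingleObject(pi.hProcess, 10000) == WAIT_TIMEOUT) {
                printf("still running after 10s: %s\n", path);
                TerminateProcess(pi.hProcess, 0);
            } else {
                printf("terminated (possibly by the scanner): %s\n", path);
            }
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        fclose(list);
        return 0;
    }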
     
  5. Guest

    Guest

    Addendum:

    Frequently, we execute specific samples in order to test the effectiveness of a memory scanner. For instance, I know that Armadillo and Obsidium also keep a trojan encrypted while it is running in memory. That's why I will always check whether a memory scanner can handle these protectors. Another example is Flux. I did not perform the infamous Flux tests in order to promote any particular scanner, but because I had the feeling that ordinary process or module memory scanners would not detect injected code (as opposed to injected modules).
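    For anyone wondering what "injected code (as opposed to injected modules)" means in practice: a module-level memory scanner walks the list of loaded EXE/DLL images, while injected code usually sits in anonymous private memory that belongs to no module, so a module-only walk never sees it. Here is a minimal C sketch of enumerating such regions with VirtualQueryEx; the PID argument and the notion of "suspicious" are purely illustrative, not how any particular scanner works.

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative region walk: list committed, executable, private
       (non-image) memory in a target process -- the kind of region a
       module-only scanner never looks at. Usage: regionwalk <pid> */
    int main(int argc, char **argv)
    {
        if (argc != 2) return 1;
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                               FALSE, (DWORD)atoi(argv[1]));
        if (!h) return 1;

        MEMORY_BASIC_INFORMATION mbi;
        unsigned char *addr = NULL;
        while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            /* Executable private memory with no backing image is the
               classic home of injected code; a real scanner would now
               read it with ReadProcessMemory and scan the contents. */
            if (mbi.State == MEM_COMMIT &&
                mbi.Type == MEM_PRIVATE &&
                (mbi.Protect & (PAGE_EXECUTE | PAGE_EXECUTE_READ |
                                PAGE_EXECUTE_READWRITE)))
                printf("suspicious region at %p, %lu bytes\n",
                       mbi.BaseAddress, (unsigned long)mbi.RegionSize);
            addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        }
        CloseHandle(h);
        return 0;
    }

    The same walk also shows why protectors like Armadillo matter here: even when the right region is found, its contents may still be encrypted at the moment the scanner reads them.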
     
  6. Starrob

    Starrob Registered Member

    Joined:
    Apr 14, 2004
    Posts:
    493

    I like your tests because you limit them to narrower conclusions than most. You outline the strengths and weaknesses of the software you are testing. Sometimes people disagree with your methodology or conclusions, but the tests you perform are about as good as you can get with a limited amount of time.

    Most tests only list how many malware samples were detected and missed. Sometimes I like to know why things were missed. Sometimes the reason things were missed conflicts with the marketing hype, and that I like to hear.

    Actually, your biggest weakness is that some people think you are too close to one developer, but I really doubt anyone will ever produce a test where no one accuses them of bias.



    Starrob
     