PC Doorguard

Discussion in 'other anti-trojan software' started by Main, May 3, 2003.

Thread Status:
Not open for further replies.
  1. xor

    xor Guest

    Firefighter, just read my post here - I am too lazy to write the same thing here again:

    ---> My DSLR Reply <---

    If you have any comments/questions lemme know here. :D

    Michael


    - Updated link to adjust for shift in post numbers at DSLR.
    - Also, here's a "direct link" by msg id if it shifts again - LWM
     
  2. Ph33r_

    Ph33r_ Guest

    Hey _anvil

    Not only that, but it also depends on whether the Trojan signatures are “unique” Trojan signatures or not, and how many actual Trojans are in the definitions once you subtract the worms.

    Hmm, those test results look to be the last bit from the PC Flank testing page - not exactly a recent thing, which makes that test irrelevant in my book... And not enough information given, like root mentioned: whether they used only “Trojans” or included worms too, and whether they used Trojan servers and Trojan clients or just Trojan clients. What were the Anti-Trojan applications' settings, and did they use just unpacked Trojans or not? Many vectors to consider here…

    Doesn’t matter where you go, most of those tests are flawed, and it depends on where the test results come from, even. I could go to one of those products' homepages and find tests comparing it with a lot of other Anti-Trojan systems, but it’s still flawed, because they use all the Trojans which had been added to their particular product, which doesn’t necessarily mean those other Anti-Trojan products are going to have nearly the same in their definitions lists. So you go to another Anti-Trojan product's site, view their tests, and you see that it, too, beats all the others.

    ohwell
     
  3. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To Xor from Firefighter!

    Thanks for the very interesting link! It was a very long debate about that issue, and sometimes too "hot" as well! :D

    Because I'm quite simple minded, I couldn't take in everything that was said there, so I have a few questions left! o_O

    What do you think about those ATs that can't unpack - are they at the end of their journey?

    Even those AVs which were not mentioned in the reference list (other than KAV-engined AVs, F-Prot-engined AVs, McAfee and RAV) were very often better at detecting trojans and backdoors than all the ATs. Is it only a financial problem, I mean a lack of human resources?

    Or were those things mostly harmless to an average user, even though they were named as trojans and backdoors in those reference programs' databases? :eek:

    Here is a short list of AVs that are very different in unpacking skills (according to the Rokop AV tests this year), and after each one is its trojan and backdoor detection rate in VirusP test 5-2003 (those that are outside that reference list)!

    Trojan and backdoor detection rate [% of 8,943 trojans and backdoors]

    1. 83.29% Norton 2003 - poor unpack
    2. 81.52% PC-cillin 2003 + a little bit unpack
    3. 80.28% Panda Titanium 2.0.4 ?
    4. 74.13% Avast 4 + a bit unpack
    5. 68.51% BullGuard 3.5 ?
    6. 68.00% DrWeb 4.29c ++ some unpack
    7. 66.58% BitDefender Pro ++ some unpack
    8. 59.70% NOD32 1.405 - poor unpack

    For me, after the above, the unpacking engine isn't enough to say that an AV is good at detecting trojans and backdoors! :cool:

    Why were all these 8 better than even TDS-3, although they start from the same line? I mean, weren't those trojans and backdoors picked from the other programs' (those 4 reference AVs') databases? o_O
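    As a rough sanity check on the figures above, the percentages translate into absolute sample counts like this (a quick sketch; only the 8,943 total and the listed rates come from the table above):

```python
TOTAL = 8943  # trojans and backdoors in the VirusP 5-2003 testset

# Detection rates (%) for a few of the listed products
rates = {
    "Norton 2003": 83.29,
    "PC-cillin 2003": 81.52,
    "NOD32 1.405": 59.70,
}

for name, pct in rates.items():
    detected = round(TOTAL * pct / 100)
    print(f"{name}: ~{detected} of {TOTAL} samples detected")
```

    So even the best scorer here misses roughly 1,500 of the samples, which is why the composition of the testset matters so much.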


    "The truth is out there, but it hurts!"

    Best Regards,
    Firefighter!
     
  4. xor

    xor Guest

    NAV scores well on old, unmodified trojans, because nobody sets up or modifies a 16-bit trojan server today (and compresses it with a runtime packer to avoid detection).

    PC Cillin is very strong with old DOS-type trojans.
    If PCC scores so high in backdoor/trojan detection, a lot of very old stuff was included in this testset. It sounds stupid, but PCC finds even more old 16-bit DOS/Windows 3.x trojans than KAV. And PCC does have UPX unpacking.
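    As a side note on UPX: in the default case a UPX-packed executable is easy to recognize, because stock UPX renames the PE sections to UPX0/UPX1. A minimal sketch of that check (section names only; section names can be trivially renamed, so a real scanner would also verify the unpacking stub at the entry point):

```python
def looks_upx_packed(section_names):
    """Heuristic: stock UPX names its PE sections UPX0/UPX1."""
    return any(name.startswith(b"UPX") for name in section_names)

# Section names as they would be read from a PE file's section table
print(looks_upx_packed([b"UPX0", b"UPX1", b".rsrc"]))    # packed
print(looks_upx_packed([b".text", b".data", b".rsrc"]))  # not packed
```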

    Panda is not as bad as it seems; they have at least basic unpack support, for instance for UPX.

    Avast has a very well-working generic trojan detection, so this is not really a surprise (at least not for me).

    Detects more clients and editservers - you can draw your own conclusion as to why it scores so high.

    Good heuristic and unpack support.

    Bitdefender has improved very much lately, and they are adding a lot of backdoors/trojans.

    Well... see BullGuard 3.5 - I guess there were a lot of client and editserver binaries in this testset.
     
  5. wizard

    wizard Registered Member

    Joined:
    Feb 9, 2002
    Posts:
    818
    Location:
    Europe - Germany - Duesseldorf
    Before you start judging and playing around with figures, you might want to get some background knowledge first. Otherwise you mix things up and come to wrong conclusions.

    Testing with (backdoor) trojans is difficult. First you have to consider what is included in the testset. A comparison becomes problematic if the testset includes one of the following types:

    a) no trojans, but harmless *.exe files that are distributed with the original server and remain on the trojan user's machine. These files would never appear on a victim's computer, so there is no real need to detect them. AV programs mostly detect these files as "trojans" as well, while some AT programs do not detect them, or only with an optional scanning feature.

    b) old (DOS/Win3.1) trojans. All of today's AT programs focus mainly on backdoor trojans. Also, these programs appeared on the scene years after AVs, when DOS and Win3.1 were already history. The only reason for today's ATs to detect those files is to score well in AT tests where the tester was too dumb to take them out.

    c) modified trojans. Compared to a few years ago, AV programs now detect backdoor trojans as well. The average time you can use a new (unmodified) backdoor server is now less than a week (I assume) before all major AVs detect it. Therefore trojan users have searched over time for ways to make (backdoor) trojans 'undetectable'. The most commonly used method is a runtime packer or crypter.
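    The evasion effect described in c) can be illustrated with a toy "crypter" (the signature string and XOR key here are made up for the example; real packers like UPX compress rather than XOR, but the consequence for a plain byte-signature scanner is the same):

```python
KEY = 0x5A  # arbitrary single-byte key, as primitive crypters used

def xor_pack(data: bytes, key: int = KEY) -> bytes:
    """'Pack' (or unpack - XOR is its own inverse) a byte string."""
    return bytes(b ^ key for b in data)

signature = b"backdoor-server-v1"  # hypothetical scanner signature
plain = b"MZ...stub..." + signature + b"...payload..."

packed = xor_pack(plain)  # what ends up on disk after "packing"

print(signature in plain)         # True  - unmodified server is detected
print(signature in packed)        # False - same server, packed, slips through
print(xor_pack(packed) == plain)  # True  - unpacking restores the original
```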

    So if your AT testset has samples as described in a) or b), then your whole test is crap and absolutely unusable. If you want to consider the situation described in c), you need to test with two different approaches:

    To 'solve' the problem with modified trojans there are two ways: unpacking or process memory scanning. Both methods have their pros and cons, but it turns out that AV programs tend to prefer the first approach and AT products the second. For the vast majority of all AV and AT programs, though, neither of these methods was implemented, and therefore these products are completely useless in regard to (backdoor) trojan protection.

    Back to the testing procedures: if your product uses the first approach (unpacking), then the test is easy: simply scan all samples and see the results. To test products that use the second method, you have to execute each sample to see whether it is detected or not.
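    The difference between the two procedures is *where* the signature match happens. A toy sketch (made-up XOR "packer" and signature; a real memory scan reads another process's address space, e.g. via ReadProcessMemory on Windows, rather than a local buffer):

```python
KEY = 0x5A

def xor_codec(data: bytes) -> bytes:
    """Toy packer/unpacker - XOR with a fixed key is its own inverse."""
    return bytes(b ^ KEY for b in data)

signature = b"backdoor-server-v1"  # hypothetical signature
on_disk = xor_codec(b"MZ..." + signature + b"...")  # packed sample

# Approach 1: static scan with unpacker support - the scanner must
# recognise the packer and reverse it before matching.
print(signature in xor_codec(on_disk))  # True, but only if the packer is known

# Approach 2: process memory scan - execute the sample and search the
# image the unpacking stub leaves in memory; no packer knowledge needed.
in_memory = xor_codec(on_disk)  # what the stub produces at runtime
print(signature in in_memory)   # True

# A scanner with neither capability sees only the packed bytes:
print(signature in on_disk)     # False
```

    This is also why testing approach 2 requires actually running every sample, which is impractical for thousands of files.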

    So, coming back to the test and your conclusions: with more than 8,000 samples (assuming all were weighted equally), the testset must have included samples of type a) or b). The test did not consider c), because nobody executes more than 8,000 samples.

    Hope that helps you a little bit to understand what the problems and background of AT tests are, why most of them can be considered useless, and why your quoted statement above is wrong.

    wizard
     
  6. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To Wizard from Firefighter!

    Thanks for the good advice! As I have mentioned, I hardly understand anything about anti-trojans. So the memory scanning versus unpacking issue is only now somewhat understood. At least I hope so!

    When I said before that "for me the unpacking engine isn't enough to say that an AV is good at detecting trojans and backdoors", I still want to add something to that.

    Maybe you misunderstood me; my point was only that I automatically assumed that an AV with a good unpacker is better at detecting trojans than the other "common" AVs. That's because, with exactly the same number of trojan signatures, it has so many more potential trojans within reach of its database, since it can unpack samples that an AV with a poor unpacker cannot!

    When we look at those last results against trojans, quite good unpackers like DrWeb or BitDefender were even worse against trojans than Norton, which is a very poor unpacker! :D


    "The truth is out there, but it hurts!"

    Best Regards,
    Firefighter!
     
  7. controler

    controler Guest

    Let's try it this way once.

    There are many who believe that a trojan/virus/worm is not harmful until it tries to execute. They believe that while the thing is still packed it is harmless. Then comes the theory of it unpacking itself in memory, and that is why memory scanning is important.
    Yes, I do believe that in some ways the hook on execute is the way to go, since it doesn't use as many resources.
    Then I go run some tests on packed material and none seem to catch it. What I end up doing is writing a letter to so-and-so and requesting that the AV catch the file before it tries to execute. I always get the same answer back: while it is in a harmless EXE format, the file is not going to hurt you. I again write back and mention that even if the file is harmless at this point, it could still be accidentally run, and it still exists, and is therefore still one more file on the internet that should not be there.
    Sorry for my babbling, only had one cup of coffee this morning LOL
     
  8. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To everyone from Firefighter!

    Hi again! I'm terribly sorry about this: I forgot the winner, Digital Patrol 4, from the VirusP 5-2003 AT test results!

    Now the VirusP trojan and backdoor detection table on the previous page is corrected! o_O

    Best Regards,
    Firefighter!
     