Home comparison of a few antivirus programs

Discussion in 'other anti-virus software' started by Cadoul, Apr 11, 2005.

Thread Status:
Not open for further replies.
  1. Cadoul

    Cadoul Guest

    From France: I tried to compare a few antivirus programs against this test set.
    I did it for my own pleasure. All updates were done on 04/11/05.
    Test set: 23,500 samples.
    12% trojans, 5% malware, 2% generators, 9% DOS viruses,
    58% Win viruses, 14% worms.
    Virusscan Pro 7.03----------------21617 detected
    Virusscan Pro 8.02----------------21533
    Virusscan Pro 9-------------------21533
    Virusscan Enterprise 8.0i----------22414
    Norton Antivirus 2003-------------21132
    Norton Antivirus 2004-------------21350
    Norton Antivirus 2005-------------21361
    Symantec Corp 9.03.1000---------21138
    BitDefender Pro 8-----------------21977
    F-Secure client Security 5.55 SR1-23194
    PC-Cillin Internet Security 12------21545
    Kaspersky Pro 4.5.0.104-----------23417
    Kaspersky Pro 5.0.20--------------23400
    Kaspersky Prototype 6-------------22791

    I had trouble with NOD32 2.13 and 2.5 beta (nonsensical results).
    Sincerely.
     
  2. Happy Bytes

    Happy Bytes Guest

    This weekend I'm going to do a big crash test with 15
    different types of cars. I do it for my pleasure.
    I'm not really experienced with it, but at least it will be fun and
    I can publish some results.

    First I will drive at a constant speed of 12 mph against a wall.
    All cars which are still alive after that will go into the 2nd round,
    where I'll try to hit a few water hydrants by driving backwards at full
    speed.

    The car that knocks the water hydrant down without losing its bumper wins
    this test.

    8^) hb. :D
     
  3. Infinity

    Infinity Registered Member

    Joined:
    May 31, 2004
    Posts:
    2,651
    @ happy bytes: let us know who wins ;)
     
  4. Cadoul

    Cadoul Guest

    Sorry, Happy Bytes, but I just want to test some products. Nothing else.
     
  5. Happy Bytes

    Happy Bytes Guest

  6. richrf

    richrf Registered Member

    Joined:
    Dec 11, 2003
    Posts:
    1,907
    The KAV results are certainly surprising. I have heard in other places that the 5.0 engine is not as strong as the 4.5, but I never quite bought into it. Here are some more results to confirm. Very strange. I hope someone from Kaspersky comments.

    Rich
     
  7. gerardwil

    gerardwil Registered Member

    Joined:
    Jan 17, 2004
    Posts:
    4,748
    Location:
    EU
    Happy Bytes,

    This guy/gal Cadoul probably did a lot of homework, and at least he knows to post in the right forum, which you don't, since your joke :( belongs in the 10F section.
    Regards,

    Gerard
     
  8. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    HappyBytes is just explaining that to make a valid test that is published to many people who believe what they see, it is not enough to just run an AV against some tens of thousands of samples. Much more work and experience are needed to perform valid AV tests. Read more at: http://www.people.frisk-software.com/~bontchev/papers/virlib.html
     
  9. rothko

    rothko Registered Member

    Joined:
    Jan 12, 2005
    Posts:
    579
    Location:
    UK
    Hi, I'd like to know what is meant by nonsense results? What exactly did NOD32 do or not do?
     
    Last edited: Apr 11, 2005
  10. Cadoul

    Cadoul Registered Member

    Joined:
    Apr 11, 2005
    Posts:
    76
    Location:
    France
    NOD32 2.5 beta detects only 12736 viruses.
    NOD32 2 stops halfway through the scanning process. I tried twice.
    I suppose I made a mistake.
    I'll try again ASAP.
     
  11. dan_maran

    dan_maran Registered Member

    Joined:
    Aug 30, 2004
    Posts:
    1,053
    Location:
    98031
    I am not trying to be critical of this test, but the DOS viruses are really not relevant anymore, IMO. Still, I do appreciate the results, as it is just another view of how AVs stack up. Yes, we all know that no AV test is perfect, but it helps give a little perspective.
     
  12. rothko

    rothko Registered Member

    Joined:
    Jan 12, 2005
    Posts:
    579
    Location:
    UK
    Hi Cadoul, if you do try NOD32 again, just check that the settings in the NOD32 scanner are set to their maximum, i.e. advanced heuristics is selected, as well as scanning of archives, etc.

    With version 2 they aren't set very high by default; there is a sticky thread from Blackspear over in the 'NOD32 Version 2' forum that can help with this.

    Please post back the results.

    Thanks, Lee
     
  13. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    I for one enjoy reading little test snippets like this. Plus, I feel quite certain that the denizens of Wilders are quite capable of recognizing that it is neither professional nor statistically pure.

    I just wish that the eset mod & the self-professed "av expert" would cease posting arrogant & insulting barbs every time anyone provides information or expresses an opinion about various security programs. Their blatant contempt for the opinions & efforts of other posters is extremely distressing. :doubt:

    I appreciate Firefighter's efforts, & Cadoul's, as well.
     
  14. NAMOR

    NAMOR Registered Member

    Joined:
    May 19, 2004
    Posts:
    1,530
    Location:
    St. Louis, MO
    Happy Bytes has a razor-sharp tongue. :eek: :p :D
     
  15. Sweetie(*)(*)

    Sweetie(*)(*) Registered Member

    Joined:
    Aug 10, 2004
    Posts:
    419
    Location:
    Venus

    Extremely well said.....I've tried to say similar things in the past only to be chastised by the moderators/admin here, who seem to have no respect for a professional opinion from a female.
     
  16. f43234

    f43234 Guest

    Can you post a few screenshots of samples detected by the antivirus software, just to let us know that this is not a joke?

    Thanks
     
  17. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    Maybe you just missed the point. The way some Eset mods post arrogant & insulting barbs against someone actually turns against the attacker and rewards the target itself. The more arrogant the insults, the less trustworthy the impression of the company in general. It's a shame, because Eset is still an excellent AV vendor.

    Best regards,
    Firefighter!
     
    Last edited: Apr 12, 2005
  18. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    @bellgamin: I don't think I posted anything arrogant or insulting; I just gave my opinion together with a useful link. Do you see anything wrong in what I wrote? I have often expressed in the past that every test (even the worst ones) shows something if used in relation to other tests and if the reader is capable of interpreting them correctly. I also try to help others at least a little to improve the quality of their test sets by telling them what to remove.
    Just as I have to accept your opinion about some tests, I think you also have to accept my opinions and posts. We are all just people here, readers of this forum, and free to express our opinions; the "AV expert" under my nickname was not written by me; it was probably Paul Wilders.
    P.S.: I think that your posts are more arrogant than mine.
     
  19. Cadoul

    Cadoul Registered Member

    Joined:
    Apr 11, 2005
    Posts:
    76
    Location:
    France
    New results.
    Updates: 04/11/05.
    Max settings.
    NOD32 2.13-------------------------22575
    NOD32 2.5 beta---------------------22736
    F-Secure 6.00.570 beta-------------23191

    A screenshot to prove it is not a joke.



     
  20. dvk01

    dvk01 Global Moderator

    Joined:
    Oct 9, 2003
    Posts:
    3,131
    Location:
    Loughton, Essex. UK
    I am only an amateur at this as well.

    I am reasonably knowledgeable at cleaning many malwares, but I wouldn't be able to run an efficient, reliable comparison test at home.

    I see many reports that xxx AV found more than BBB AV so it's better, but that is very hard to quantify for a non-professional tester.

    I have a selection of malwares that have been culled from different infected computers, and I could very easily run numerous antiviruses on them and see what is and isn't detected.

    That doesn't mean that any one of them is better than another, and I know that many of my samples are duplicates/triplicates or whatever; some are zipped and some are unzipped.

    Many antiviruses don't detect spywares and adwares, even though they probably cause a lot more of the problems we see nowadays than the traditional virus, which in the wild now seems fairly infrequent compared to trojans/worms & spywares.

    Detection of a sample is one thing, but removal & cleaning is quite another, and I don't know of any home user who is prepared to, or has the time to, infect his system with all the samples, see if the AV detects and cleans the running version of them, and repairs any damage done to system settings;

    then, to make it fair, wipe the computer completely, install a different AV, and do the same thing again, and keep on doing it until all the AVs are tested.

    Just scanning a disc or folder containing a bunch of samples doesn't tell me anything, as it is very possible that until the virus/trojan/worm is active an AV can't detect it in its dormant state, since it might well have a different signature to the running version; that is probably why so many people get infected despite running antiviruses that appear to have a good detection rate in your list.
     
  21. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    If I may present your results more clearly, this is how they look now.

    Test set: 23,500 samples.
    12% trojans, 5% malware, 2% generators, 9% DOS viruses, 58% Win viruses, 14% worms.

    Kaspersky Pro 4.5.0.104------------23417 -- 99.6 % detected
    Kaspersky Pro 5.0.20---------------23400 -- 99.6 %
    F-Secure client Security 5.55 SR1---23194 -- 98.7 %
    F-Secure 6.00.570 beta------------23191 -- 98.7 %
    Kaspersky Prototype 6--------------22791 -- 97.0 %
    NOD32 2.5 beta--------------------22736 -- 96.7 %
    NOD32 2.13------------------------22575 -- 96.1 %
    Virusscan Enterprise 8.0i------------22414 -- 95.4 %
    BitDefender Pro 8-------------------21977 -- 93.5 %
    Virusscan Pro 7.03------------------21617 -- 92.0 %
    PC-Cillin Internet Security 12--------21545 -- 91.7 %
    Virusscan Pro 8.02------------------21533 -- 91.6 %
    Virusscan Pro 9---------------------21533 -- 91.6 %
    Norton Antivirus 2005---------------21361 -- 90.9 %
    Norton Antivirus 2004---------------21350 -- 90.9 %
    Symantec Corp 9.03.1000-----------21138 -- 89.9 %
    Norton Antivirus 2003---------------21132 -- 89.9 %
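
    For anyone who wants to check the arithmetic, here is a minimal Python sketch that recomputes those percentages from the raw counts; the names and numbers are copied straight from the table above (only a few entries shown for brevity):

        # Detection counts from the table above; the test set holds 23,500 samples.
        results = {
            "Kaspersky Pro 4.5.0.104": 23417,
            "Kaspersky Pro 5.0.20": 23400,
            "F-Secure Client Security 5.55 SR1": 23194,
            "NOD32 2.5 beta": 22736,
            "Norton Antivirus 2003": 21132,
        }
        TOTAL = 23500

        # Rank by detections and print each rate to one decimal place.
        for name, detected in sorted(results.items(), key=lambda kv: -kv[1]):
            print(f"{name:36s} {detected:6d}  {100 * detected / TOTAL:.1f} %")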

    Best regards,
    Firefighter!
     
  22. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    First off, I'd ask all posters to refrain from making personal comments, all they do is detract from the discussion and pull the thread off topic.

    Second, an analogy has been made between AV testing and any other type of testing one could envision, the worthiness of a car in crash testing being the specific example employed. If everyone could dispense with the emotional content for a time, the analogy has a significant amount of merit. Driving cars into objects is not the same as crash testing. In both cases you end up with dented cars. In the former case there are simply too many uncontrolled factors to allow you to figure out what the dents really mean. Performing controlled crashes is a much more complicated job. Speeds and approach vectors have to be controlled, objects to be struck have to have well-defined and constant properties, and you have to have a protocol to quantify what the dents mean. AV testing may seem much simpler - all you need is a file and an AV program, right? No, not really, it's not simpler. The same adherence to controlled states for the challenge and controlled analysis of the results applies.

    What that means in AV testing is that the samples have been unambiguously validated to be functional. You have to know what the file in front of you is - how it was put together, whether it's an archive or not, whether there are a number of identical files posing as distinct entries in your testbed, whether the file is operative on the platform being tested, and so on. The vetting process is extensive and well described in the link provided by IBK. Before the testing is even started, the first question a reader should be asking is whether the test bed has been validated in its entirety. I am fairly capable, and I would not even consider undertaking the validation of a test bed of a size worthy of examination; it is too much work for me.
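
    To make one of those vetting steps concrete, here is a small illustrative Python sketch (my own example, not any tester's actual tooling; the "testbed" folder name is hypothetical) that flags byte-identical files posing as distinct entries by hashing every sample:

        import hashlib
        import os

        def find_duplicates(root):
            """Group files under `root` by SHA-256 so byte-identical samples show up."""
            seen = {}  # hex digest -> list of file paths
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    with open(path, "rb") as f:
                        digest = hashlib.sha256(f.read()).hexdigest()
                    seen.setdefault(digest, []).append(path)
            return {d: p for d, p in seen.items() if len(p) > 1}

        # Report every group of identical files in the hypothetical sample folder.
        for digest, paths in find_duplicates("testbed").items():
            print(digest[:12], "->", paths)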

    Assuming the entire testbed has been validated, then there is the question of settings to employ in each of the AVs. Obviously that can have an enormous impact. Even if you assume everything has been done correctly to this point, there is the overriding question of what the results mean, what levels of difference are statistically significant given that a finite subset has been examined. This topic has been discussed both informally and in a more formal context. If a tester is going to provide either a rank ordering or numerical results reflecting pass rates, they need to understand which differences are statistically different and which are not. A reader needs to know whether the differences quoted in this thread are statistically significant. I have no idea of the statistical significance, if any, of the results quoted above. Is a 96% identification level statistically different from a 99% level for the tests run? I have no idea. Is the range of results which are statistically equivalent larger or smaller? Again, I have no idea. Finally, quoting numbers to a level of numerical precision implies a level of confidence in the results. I'd say that the results in this thread are known to 1 significant figure at best. I wouldn't push them further than that.
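
    As a rough, purely illustrative sketch of the sampling-noise half of that question (it assumes, counterfactually, that the 23,500 samples were a random draw from a well-defined population, and it says nothing about the test-bed validity issues above), a normal-approximation confidence interval for a detection rate can be computed like this:

        import math

        def detection_ci(detected, total, z=1.96):
            """~95% normal-approximation confidence interval for a detection rate."""
            p = detected / total
            half = z * math.sqrt(p * (1 - p) / total)
            return p - half, p + half

        # Hypothetical counts corresponding to roughly 96% and 99% of 23,500 samples.
        for detected in (22560, 23265):
            lo, hi = detection_ci(detected, 23500)
            print(f"{detected}/23500: {100 * lo:.2f}% .. {100 * hi:.2f}%")

    Under that unrealistic assumption the two intervals do not overlap, so sampling noise alone would not explain a 96% vs. 99% gap; the real uncertainty lies in the validation questions, exactly as argued above.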

    That's my short take on things and that doesn't even start to touch on some of the more pragmatic questions raised by dvk01 and others elsewhere.

    Blue
     
  23. jmschwartz

    jmschwartz Guest

    Hello,

    As a frequent visitor and not-so-frequent contributor, I enjoy personal AV comparisons and opinions. The healthy "competitive banter" (à la ESPN's Around the Horn) makes for good reading and, occasionally, food for thought.

    Having said that, the analogy between AV testing and crash testing cars is hardly apropos (no matter how rationalized), since this is an AV forum--not an automobile test site. Let's continue to view posts from the "little guys" who try to "crash test" AV programs.

    Regards,
     
  24. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    Ahem, pardon me for interrupting this thread, and I probably don't know what I'm talking about, but here's what I figure:

    A test made by a single person is a good indicator of how an AV performs for that user. The results of one person's tests may or may not apply to another. Every person has different habits and testers have different sample sets, and one cannot simply say that it is an indicator of real world performance.

    And let's not forget the fact that many of those samples *could* be just junk files which are identified as malware by different AVs.

    There are many cases where a simple version difference can make a noticeable difference in performance. Take PCC for example: the pre-2004 versions don't have spyware detection, hence they miss out on a lot of spyware.

    The professional tests are made according to the WildList, which is why they can be considered reliable for many users in the time period when the test was made.

    I'm not saying that personal tests should not be done; I'm only saying that what a personal test reports may really not hold for other users.

    Just my bit.

    Regards,
    Firecat :)
     
  25. Ianb

    Ianb Registered Member

    Joined:
    Nov 26, 2004
    Posts:
    232
    Location:
    UK
    Thanks for posting the results Cadoul.
     