Question about Antivirus tests

Discussion in 'other anti-virus software' started by Jan259, Jun 3, 2004.

Thread Status:
Not open for further replies.
  1. Jan259

    Jan259 Guest

    Hi all,

    I would like to know whether the independent antivirus testers such as Virus Bulletin, ICSA Labs, and a very interesting newcomer, AV-Comparatives (http://www.av-comparatives.org), verify their virus/worm/trojan/malware samples by running them on an appropriate operating system to confirm they are live malware and not corrupted files. If they do, WOW!!! how do they manage that with ten thousand pieces of malware?

    When it comes to professional research, we need good material to produce good, undisputed results. I'm a NOD32 user, I'm interested in NOD32's scores in Virus Bulletin, and I wonder how this small and fast antivirus can beat all the big names like Norton, McAfee, Trend Micro, and Kaspersky.

    Your answers are greatly appreciated.
    Thanks
     
  2. Arin

    Arin Registered Member

    Joined:
    May 1, 2004
    Posts:
    997
    Location:
    India
    Welcome, Jan259. Why don't you register and get a free teddy bear? No, they don't run those viruses to check whether they are live or not.

    Yes, NOD32 is a very good antivirus with a weird name. Go through the topics and you'll find a good deal of discussion about how antivirus software really performs.
     
  3. tazdevl

    tazdevl Registered Member

    Joined:
    May 17, 2004
    Posts:
    837
    Location:
    AZ, USA
    I'd suggest reading the reviews a bit more closely, especially on AV-Comparatives. I don't see NOD32 at the top of the pack in those tests.

    rokop.de has also been doing some great tests.

    Several Product Comparison
    http://www.rokop-security.de/main/article.php?sid=693

    NOD32
    http://www.rokop-security.de/main/article.php?sid=718
     
  4. Herman92

    Herman92 Guest

    Err, NOD32 doesn't beat them, and looking at the actual individual test data, I'd say scoring 100% on VB is just scoring 100% on VB. Tracking the long-term record tells you little beyond, say, the last year or so, since products change drastically.

    Secondly, VB checks viruses, and only viruses. So scoring 100% on that, while a good thing, isn't the most important thing, and there are products that certainly score vastly higher than NOD32 in the other - and, as some think, more important - categories.

    Regards
     
  5. optigrab

    optigrab Registered Member

    Joined:
    Nov 6, 2002
    Posts:
    624
    Location:
    Brooklyn/NYC USA
    I was just about to post the same comment...

    Agreed! I've made this observation in the past...

    Certainly some feel this is true, but perhaps others feel that ITW malware is the most important, while still others feel there are characteristics other than detection worth considering.

    Personally, I feel NOD32 does very, very well on AV-Comparatives, Rokop and Virus Bulletin (of course). Getting back to Jan259's post, the supposition about beating "all big name competition" is misleading and rather misses the point.
     
  6. Sandish

    Sandish Registered Member

    Joined:
    Apr 29, 2004
    Posts:
    51
    Darn, when will I get mine?
    :rolleyes:
     
  7. Jan259

    Jan259 Guest

    WOW!!! Thank you guys for the answers.

    The reason I got curious and had to ask you all is this topic:

    Snakeoil or not : https://www.wilderssecurity.com/showthread.php?t=6463&highlight=virusP

    That topic, especially Mr. rodzilla's replies, made me curious about the quality of every antivirus test, and it is good to know that Virus Bulletin verifies its virus samples.

    But no offence intended - how good is the quality of the following tests if they don't verify their malware samples?

    1. Rokop
    2. AV-Comparatives
    3. VirusP

    Where do they get their samples - by downloading from VX sites, or some other way?

    Your answers are greatly appreciated.
    Thanks.
     
  8. Bryce James

    Bryce James Guest

    Real-world experience to me is the most important thing, and in my real world, NOD32 consistently missed malware and trojans to the extent that I had to remove it from my PC and move on to better products.

    Most of these so-called test sites need some lessons in testing methodology. It's ironic that few of them test packed, rebased, and heuristically masked baddies. The tests I've seen on those should be alarming to some, given the complete lack of performance of some AV products.

    Basically though, you'll find a bunch of people on this forum nodding their heads in a "yes" motion when NOD32 does well. When it completely bombs a test, you'll find those same people criticizing the tester, his methods, and downright slandering some poor folk. Some even go so far as to say "It failed on purpose, the test was flawed!"

    /shakes head
     
  9. optigrab

    optigrab Registered Member

    Joined:
    Nov 6, 2002
    Posts:
    624
    Location:
    Brooklyn/NYC USA
    Bryce,

    With all due respect, how do you reconcile these two statements?
    So when, in your opinion, is dismissing an AV test called for, and when is it willful ignorance, given that we're not all experts in statistical testing methodology?

    Clearly, one must take the results of new (typically unexplained) comparatives with a grain of salt. OTOH, if Rokop, VB100, and AV-Comparatives are generally well regarded around here, and NOD32 does very well in those tests and has served me well (particularly when one acknowledges that detection is a major attribute but not necessarily the only one), shouldn't I give that more credence than any individual who reports that NOD32 failed on their machine or in their homemade tests?

    Cordially,
    Optigrab
     
  10. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    I'm eager to know whether there are testers other than Virus Bulletin that verify all their tested samples as really infected and also publish their test results the way VB does - not just "Passed" or "Failed", but the detected and missed samples as well.

    Since VB has a long history behind it and tests several times per year, it is quite simple with VB's resources to verify some 40-65 NEW "In the Wild" viruses per test and about one NEW polymorphic virus a year. How many NEW macro and standard viruses they have verified before each test I don't know, only that there are some 1,600 different virus names in VB's zoo test (= macro, polymorphic and standard tests).

    AV-test.org, with over 73,000 different samples, VirusP, with over 58,000 different samples, and AV-Comparatives.org, with over 44,000 (?) different samples, would have enormous difficulty verifying, sample by sample, that each file in their test bed is really infected, so they have to rely on positive detections from several AVs. Can you imagine how much time it would take to check, say, 73,000 samples one after another to see whether each is really infected with a virus or a trojan, or was only a false alarm? That's the key point when VB believers dismiss every test but VB's as rubbish: they know it is impossible IN REAL LIFE to prove 100% of those 40,000-75,000 tested samples really infected. But does it matter so much, when even Virus Bulletin's false-alarm tests with 20,000 clean files show clearly fewer than ten false alarms in the report of every product that earned a 100% VB Award (i.e. a probability of less than one false alarm per 2,000 detections)?
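
    (Roughly like this, I imagine - a small sketch of relying on several scanners' verdicts instead of executing every sample. The scanner names, report format and vote threshold below are only assumed for illustration, not any tester's real procedure:)

        # Hypothetical sketch: accept a sample into the test-bed only when several
        # independent scanners agree it is malware, instead of executing each file.
        # Scanner names, report format and the threshold are assumptions.

        from collections import defaultdict

        def build_consensus(reports, min_detections=3):
            """reports: iterable of (scanner_name, sample_id, detected: bool)."""
            votes = defaultdict(int)
            for scanner, sample_id, detected in reports:
                if detected:
                    votes[sample_id] += 1
            # Keep samples that enough engines flagged; the rest go to manual review.
            return {s for s, n in votes.items() if n >= min_detections}

        # Example: three scanners agree on sample "a", only one flags sample "b".
        reports = [
            ("ScannerA", "a", True), ("ScannerB", "a", True), ("ScannerC", "a", True),
            ("ScannerA", "b", True), ("ScannerB", "b", False), ("ScannerC", "b", False),
        ]
        print(build_consensus(reports))  # {'a'}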

    I also agree that tests other than VB don't have the same reliability as VB when it comes to proven viruses, but because the world isn't only black and white, those other tests may add important value in estimating how good some AV really is against all kinds of nasties. Another thing is that in VB you actually know beforehand (most of) the NEW ITW viruses being tested, because the ITW list is openly published:

    http://www.wildlist.org/WildList/Real-Time.htm

    At school, when you knew the questions before the test, we called that cheating.

    After all, it is up to each vendor how much of its overall protection effort it devotes to succeeding in VB tests, because AV producers have an agreement covering ITW threats under which they are informed about new worldwide threats immediately when such threats are detected.

    Best regards,
    Firefighter!
     
    Last edited: Jun 4, 2004
  11. Plank

    Plank Guest

    Thanks, Firefighter - that further suggests VB's tests are a bad joke. I never put much credibility in them anyway; they seemed rather limited in true threat scope.

    It's funny how some companies beat the VB drum nonstop; you send them a single question and get back five lines about "how we have never failed a VB test". Umm, like I really care? Personally, I don't think it's much of a selling point at all, and if you want a selling point, let's see how well you do against packed and rebased worms and trojans - the real threats out there.

    Ironically, Eugene Kaspersky was right when he said NOD32 is focused and tuned on one thing and one thing only: winning VB tests. Obviously, if they have the answers before the test, then it's a no-brainer to win it, right?

    So we can surmise that VB awards are nothing more than a show of how efficiently a company can add MD5 definition checks to its database. Nothing more, nothing less, and as such I'd put that test down at the EICAR level in terms of using it as a guide to which AV you should purchase.
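
    (For anyone wondering what an "MD5 definition check" would even look like, here is a rough sketch - the hash list is a tiny made-up example, and real engines use far richer signatures than whole-file MD5s:)

        # Minimal sketch of hash-based "definition checking": hash the whole file
        # and look it up in a list of known-bad hashes. The single entry below is
        # the commonly cited MD5 of the EICAR test file; real databases are huge
        # and real AV signatures go far beyond plain file hashes.

        import hashlib

        KNOWN_BAD_MD5 = {
            "44d88612fea8a8f36de82e1278abb02f",  # EICAR test file (commonly cited MD5)
        }

        def md5_of_file(path):
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def is_known_bad(path):
            return md5_of_file(path) in KNOWN_BAD_MD5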
     
  12. optigrab

    optigrab Registered Member

    Joined:
    Nov 6, 2002
    Posts:
    624
    Location:
    Brooklyn/NYC USA
    Is that what Firefighter was saying?! I must admit I did find his post a little difficult to follow (through my own ignorance I'm sure, not his fault), but I thought he was defending several other testers - yet not entirely at the expense of VB's credibility:

    Makes sense to me. OTOH, for some reason many don't see VB and "other testers" as two legitimate, albeit different, animals. Why must VB's credibility and other testers' credibility be mutually exclusive?

    On a different note, I'm wondering (again, through my own ignorance) why this seems so damning to VB's critics:

    I suppose this is only an analogy, and I thank Firefighter for using an analogy to help others like me understand, but I find it confusing. If you know the questions beforehand, and the other students also know the questions, and the teacher knows this too, I'm not sure it is cheating. Another analogy might be a driving test: you know the instructor will ask you to parallel-park and merge into traffic, but it is still a challenge to pass anyway. I may just be illustrating my confusion here.

    Respectfully,
    Optigrab
     
  13. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To optigrab from Firefighter!

    In my mind you still missed the point, fairly speaking. I wrote:

    > After all, it is up to each vendor how much of its overall protection effort it devotes to succeeding in VB tests, because AV producers have an agreement covering ITW threats under which they are informed about new worldwide threats immediately when such threats are detected.

    More clearly: in business there is unbelievable competition all the time. Pouring your limited resources into work that in business terms is waste - "muda", as the Japanese say -

    http://www.aafp.org/fpm/990300fm/23.html

    - non-productive effort outside your value-adding activities - doesn't make your product any better.

    As I wrote, there is an agreement between most reputable AV companies that they will receive every real ITW virus very quickly after someone has detected it - so why reinvent the wheel? How often has Kaspersky really missed those ITW viruses? False positives are the main reason why KAV has missed some 100% Awards. I don't care so much when some AV falsely flags one file out of 20,000 clean ones. It's more important that they focus on every possible infection walking around; whether those are viruses or trojans makes no difference.

    Best regards,
    Firefighter!
     
    Last edited: Jun 4, 2004
  14. Yealla

    Yealla Guest

    Yeah, and this is why most knowledgeable people just go with KAV or anything that uses the KAV engine/defs. Why bother with anything else? Reinventing the wheel is a good analogy, really; it seems all these other AVs are basically attempting to carve out their own market share in a big market and are falling short.

    It defies explanation why people would pass over a product like KAV or a KAV-engined system for something else that is proven to miss, say, 50-60% of non-virus threats. Then to say "use layers" and buy one product for AV, another for AT, and still another for malware, because "it's safer, and why keep all your eggs in one basket", is beyond explanation. That's like saying I should have five keyboards connected to my PC in case one breaks!

    Seems to me they are just weaker alternatives. I'm not a KAV fanboy; I just feel they got it right from the start, so why bother with the rest?
     
  15. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    I'm not saying that this situation will stand forever. If someone understands that every obstacle is actually a door to a big step forward - by accepting criticism, not by denying it - they may hold valuable keys to removing the barriers that separate them from the absolute top!

    Best regards,
    Firefighter!
     
  16. Arin

    Arin Registered Member

    Joined:
    May 1, 2004
    Posts:
    997
    Location:
    India
    Dear Sandish, I think you should ask Paul; he gave me a free goody. My first post was moved to another forum, which was not fully justified. Anyway, my tears are dry now.

    Dear Jan259, the samples come from all sorts of sources. The testers exchange samples too. Sometimes they download from VX sites and sometimes a writer sends them viruses, but the major portion comes from users who submit viruses.

    Dear Bryce, I also think that NOD32 should expand its database like Kaspersky. With AH and a big database, NOD32 would give the big names a good run for their money. But do not forget that the *REAL* world means one thing to a businessman who is checking his email and another thing to his teenage kid who is watching porn and downloading pirated software.

    Dear Plank, please explain what you mean by "So we can surmise that VB awards are nothing more than a show of how efficiently a company can add MD5 definition checks to its database" - especially the 'MD5 definition check' part. You are obviously dancing in the dark.

    Dear Firefighter, don't confuse school tests with VB tests; Optigrab's explanation was very good. But I agree with you on one point: detecting ITW samples isn't enough. Let me explain a few things here. There is a Nachi variant which copies your TFTP daemon to the WINS folder and renames it to SVCHOST. Scan your system with some AVs, like Panda, and it clears out the Nachi part. Now scan your system with McAfee and it detects that SVCHOST and deletes it. So you see there is more to it than just detection: proper cleaning is also necessary, and there lies the real quality. Don't take my example too seriously and argue that the SVCHOST wasn't really harmful; my point is about totally removing all traces. Almost all AVs detect some well-known viruses, but when you clean them you'll find the results are quite different. So if two AVs pass the VB test, don't put them in the same category.
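
    (To make my point concrete, a small sketch of the kind of leftover check I mean - the exact file locations below are my assumption of where that Nachi variant drops its copies, for illustration only:)

        # Sketch of a "did the cleanup remove everything?" check, following the
        # Nachi example: a copied TFTP daemon dropped as SVCHOST in the WINS
        # folder. The paths are assumptions for illustration only.

        import os

        SUSPECT_LEFTOVERS = [
            r"C:\Windows\System32\wins\svchost.exe",
            r"C:\Windows\System32\wins\dllhost.exe",
        ]

        def leftover_traces():
            """Return any suspect files still on disk after a cleanup run."""
            return [p for p in SUSPECT_LEFTOVERS if os.path.exists(p)]

        if __name__ == "__main__":
            remaining = leftover_traces()
            if remaining:
                print("Cleanup incomplete, traces remain:", remaining)
            else:
                print("No known Nachi leftovers found.")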

    Dear Yealla, I like KAV for its huge database; I'm truly a fan of Kaspersky. But layered security does make some sense. First, let me ask: if you compare Kaspersky's database with another AV's, what makes you think Kaspersky will totally cover the other database? Kaspersky's database is huge, but that doesn't mean it's a superset of all other databases.
     
  17. Firefighter

    Firefighter Registered Member

    Joined:
    Oct 28, 2002
    Posts:
    1,670
    Location:
    Finland
    To AMRX from Firefighter!

    No one is perfect. Please, look at my link,

    http://www.checkvir.com/index.php?CN=3.3&CIE=0

    Almost every reputable AV was included in this test (April 2004) of infection cleaning; McAfee and Norton were left out.

    But which were the winners? eTrust v7 and Trend Micro. Quite confusing?

    In the REAL world, there is no chance that the winner takes it all.

    Best regards,
    Firefighter!
     
  18. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    We do verify the samples where possible. We had just one problem in February, where a mistake happened and bad samples were mixed into the regular database. We fixed the results, and the results that were online on the official date (1st March) are the corrected ones. After that date we continuously checked whether any of the bad samples were still present in the regular databases and whether the results held up - we determined that certainly no more than ~250 samples out of 300,000 would have been better removed from the regular databases (even though some of them "could" do harm), and that they do not influence the results or rankings in any way. Anyway, no big test set is absolutely perfect. We do our best, but even higher quality would require doing this as a full-time job for many people. As everyone involved is a student, friend, or IT professional who does this on the side and for free, we will never say that our test sets are perfect, but they are of good quality.
    We get the samples from AV companies and from user submissions (small and big ones). We check user collections/submissions more deeply, as they usually contain more garbage. So far we have over 120,000 samples that we prefer not to include in the regular databases, because they are just garbage: corrupted/intended samples, components, objects, source code, etc. We also do not include in our regular test sets samples like constructors, virus/hacker tools, adware, commercial spyware/backdoors, dangerous tools/applications, jokes, simulators, etc., as most companies would not like to see their product tested against a test bed containing such files. Because some of those samples were sorted out using automated tools, it is of course possible that we also excluded some real viruses/malware, but only a few. (Just to illustrate, a rough sketch of that kind of automated pre-filter follows; the size cut-off and extension list there are made-up assumptions, not our actual criteria.)
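
        # Rough sketch of an automated pre-filter that weeds out obvious garbage
        # (zero-byte files, truncated fragments, plain source code) before samples
        # reach the regular test bed. The size threshold and extension list are
        # assumptions for illustration, not the tester's real criteria.

        import os

        SOURCE_EXTENSIONS = {".asm", ".c", ".cpp", ".pas", ".txt"}
        MIN_SIZE_BYTES = 64  # anything smaller is unlikely to be a working sample

        def looks_like_garbage(path):
            size = os.path.getsize(path)
            ext = os.path.splitext(path)[1].lower()
            return size < MIN_SIZE_BYTES or ext in SOURCE_EXTENSIONS

        def split_collection(paths):
            """Separate a raw submission into candidate samples and obvious garbage."""
            keep, garbage = [], []
            for p in paths:
                (garbage if looks_like_garbage(p) else keep).append(p)
            return keep, garbage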
    Anyway, no matter how good the test beds are, some AV companies (or rather, some individuals within AV companies) will always say that the test bed, the test procedure, etc. are bad; but when we ask why, we get answers such as that the person saying this never looked at the samples provided to them for review, so they cannot really say that the samples used were bad. Often the real reason "the test bed is bad" is simply that their product did not score as high as they expected, and they start telling us (in an indirect way) how to change our procedures so that their product scores better. As we want to stay independent, we do accept criticism and suggestions on how to improve our tests, and if we share the opinion and think the ideas are good, we will apply changes; but we are not going to change our test procedures just to make certain companies score higher because they think they must score high in all tests, and we also do not want simply to deliver the same results and conclusions as other testers. My opinion is that users should look at various tests and compare them. Looking at just one test does not say much, as different tests may use different procedures, different samples, etc., and if, for example, one product is at the top in all the different tests, it is quite probable that the product really is a good one.
    The strange thing is that a company that really did not score so well in our tests is quite happy about the test we did, as it shows them what they have to improve. This shows that some companies understand how to interpret the results correctly and do not use them just for publicity. As I always say, all the products tested are already a selection of "very good antivirus scanners". The tested products detect over 85-90% of zoo samples, and that is quite a lot (if I remember correctly, ICSA Labs requires 90% detection for certification). It does not really matter whether a product detects 90.15% or 90.17% in total, but it is still interesting to see the results (at least for me). I would recommend the use of any of the tested scanners, but when you are choosing a scanner you should NOT rely only on detection performance against zoo samples, but also on other very important features that fit your requirements. There are also other good scanners out there (and some bad scanners), but we cannot test every product, as our time is limited and testing more products means spending more money.
    We also provide the test logs to AV companies for review, but we do not provide the logs to customers (for a somewhat undefined "security reason"). Some weeks before the results go online, we also give the AV companies the chance to comment on the results or to notify us if they think we made errors in the test. We also ask them whether they want to be tested, or to tell us if they would prefer not to be tested by us (even if they usually cannot really decide that), and so far all are happy to be tested (even the company that says our test beds are bad did not say it would prefer not to be tested in the future).
    In the past I read antivirus tests the way you (customers) now read test results; I always had my own opinion about tests and reviews, and that is why I think you should never rely on just one test but look at several and then form an opinion. In the links section of our website you can find some other well-known and good test centres.
    I hope I have answered all your questions and curiosity clearly, even if I am quite tired at this time [errors are possible] ;-)
     
    Last edited: Jun 6, 2004
  19. AVK Fan

    AVK Fan Guest

    The lack of desire among testing houses to test a wider range of products is a bit bothersome to me as well. For example, there are two versions of AVK out there, GDATA and eXpendia AVK; one is licensed and produced for the US, the other for Germany. The US version gets almost ZERO testing, while the German one gets infrequent testing - but it's common knowledge that the US AVK is a better product thanks to its RAV+KAV combo over the KAV+Bit combo.

    In the tests I have seen with this product, it easily sweeps the number-one slots, but most of those tests seem a bit older. AVK-Germany is tested by VB, I noticed, and scores 100%, but they've never tested the US version - which is difficult to understand, especially with the US version starting to gain wider popularity.

    To me, having the KAV engine and definitions matched up with the superb RAV definitions and heuristics in a single double-redundancy package seems like the ideal situation - and in practice, for me, it IS the ideal situation. I'd just like to see more testing done with it in comparatives against other products. I've talked with a couple of testing people and they said it would be "unfair to the other AVs" to include multi-engine products in their tests. WTF o_O That's the worst excuse I've heard lol...

    Or how about other, less mainstream AVs like CLAMWin and Icarus? There are plenty of them out there; I'd like to see them tested more often.
     
  20. AgentX

    AgentX Registered Member

    Joined:
    Dec 25, 2003
    Posts:
    44
    Location:
    The Intarweb
    Quit writing it 'eXpendia', or you'll be easily recognized, Mr. Kobra. :D
    - AgentX
     
  21. Arin

    Arin Registered Member

    Joined:
    May 1, 2004
    Posts:
    997
    Location:
    India
    You gotta be kidding me, AgentX. Do you think so? Anyway, dear Firefighter, I knew about the CheckVir testing, and the fact is that they use 199 viruses to test those AV products while the real ITW list has 314 bugs. Also, some of the viruses were different, so I can't take the test seriously. Well, I used McAfee and I know how well it can disinfect. You said Trend Micro passed the test, huh? Try this, my friend:

    Infect a floppy with the Wyx (AKA Preboot) virus and check it with Trend Micro. You'll find that it can't be cleaned. Wyx is on the WildList as well as in the CheckVir test set.
     
  22. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    Ladies and gents,

    Since our goal is to provide everyone room to express opinions and ask questions over on this board, we do allow guests to post instead of obligatory forum membership - at least for the time being.

    Abusing this freedom - for example one and the same person posting in one and the same thread using different guest names - comes very close to trolling. We do not allow that.

    In this context: a final warning for the guest using Herman92, Bryce James, Plank, Yealla and AVK Fan as nicknames here, and Kalsse and Brice on other threads. Either use a registered user name, or use just one guest name.

    regards.

    paul
     
    Last edited: Jun 6, 2004
  23. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    As of this moment, the guest using all the nicknames mentioned above - and who added a new one, "Smitty" - has been banned from this board. This decision is not open for discussion. The person in question is free to contact me using a non-web-based email address for an explanation, if needed.

    regards.

    paul
     