new results from AV-Test.org (Q1/2008)

Discussion in 'other anti-virus software' started by Valentin_Pletzer, Jan 22, 2008.

Thread Status:
Not open for further replies.
  1. Coolio10

    Coolio10 Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    1,124
Sorry for the off-topic post, but I just noticed the experts are back.

Welcome back, Paul and Inspector Clouseau :D
     
  2. computer geek

    computer geek Registered Member

    Joined:
    Oct 6, 2007
    Posts:
    776
Just because they got 99% in one test does not mean it protects you 99% of the time; there are other tests, and besides, how likely is it that these are the samples you'd actually get infected with in the wild? Oh, and Coolio, if you're saying you salute m:D c:D a:D f:D e:D e:D , I am honoured. :D
     
  3. Diver

    Diver Registered Member

    Joined:
    Feb 6, 2005
    Posts:
    1,444
    Location:
    Deep Underwater
Does this thread have something to do with NOD32 having a relatively poor showing? Because if it were McAfee or something like that, nobody would care.
     
  4. Coolio10

    Coolio10 Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    1,124
It's been like that for a while; get used to it :) .
If Kaspersky or NOD32 get a bad score, then all hell breaks loose.
If they did well, then everyone would be saying congrats to Eset instead of accusing the results of being bad.

Exactly, who cares about Norton or McAfee when there's NOD32 or Kaspersky........ ;)

It's the food chain of antiviruses.
     
  5. computer geek

    computer geek Registered Member

    Joined:
    Oct 6, 2007
    Posts:
    776
Actually, yes, NOD did do quite badly in this thread's test (beaten by m:D c:D a:D f:D e:D e:D :p :p :p ), but still, McAfee is a big company and some people do care about what they get. ;)
     
  6. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
What I find funny is how they always refer to the 1%, as if they were unlucky.

Regardless of how good IBK and Marx think their tests are, I'm not buying it; there are high numbers of junk files, non-executable files, corrupted files, memory dumps, etc.

However, IBK is kind enough to send the files off to the vendors, which is great, and I'm not saying Dr.Web is perfect.... a few thousand are always added from the DVDs, but much is also dismissed as not useful by Dr.Web's own AV lab.

These huge tests, which are sorted using automated tools and 'what ifs', are just not reliable indicators of the real threats circling the internet, regardless of whether Marx wants to state that the million 'variants' are all from the last six months.

When I constantly see Dr.Web fail these tests, it makes me wonder what actual junk many of the AVs are adding, especially since many detect the very files Dr.Web has checked and verified for itself to be useless. This is why I wonder what junk the 99% AVs are adding, crapware, to their databases. (Let me guess: it's these AVs that have lots of signatures arriving per day and a large database with slower updates. Go figure. But the customer, you and I, will always think these are genuine threats.)

Cartoonboy's latest thread is a perfect example of a real-world threat that most AVs can't handle properly. Deleting the archive is not good enough for prevention/cleanup or whatever you want to call it.

There are many imitators in the AV market, companies that either use someone else's technology or simply copy it and call it their own, and there are companies which always develop their own technology.

Of course, I could go on for days and days about this; the same topic seems to pop up every time one of these large tests arrives. I also understand why Paul has entered the argument, because of NOD32's poor showing as well, and I also understand why a lot of people don't enter the argument: if their AV shows 'good', they live a happy existence on this board and besmirch anyone who challenges the results, even when reasons are given.

End of rant for now; I shall let Paul have a go ;)

But I'm sure something else will come to mind in due course.

    peace :D
     
  7. computer geek

    computer geek Registered Member

    Joined:
    Oct 6, 2007
    Posts:
    776
I don't know if you were trying to say this, but in my opinion these people should be concentrating on adding proper detections, not adding old, long-gone, no-longer-a-threat viruses to their databases. But that's marketing; they need it to get money for old protection...
     
    Last edited by a moderator: Jan 23, 2008
  8. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland

    How on earth can someone or a group get through that many submissions?

OK, you get 2,300 samples in an hour; how do you check whether they are malware or not? By 'hand'? By this I mean: do you actually investigate the code, or is it run through some kind of scanner?

My point is, unless each one is looked at individually by someone expert in determining whether a sample is actually malware, what are you relying on to tell you whether it's malware or not?

To me, who has no knowledge of how this is done, it looks humanly impossible for people to go through that number of samples verifying each one, which leads me to the following question:

In a sample set of tens of thousands to a million, are all of these verifiably malware? Could a percentage be false positives, corrupted, or otherwise inert?

    Thanks in advance.

    JJ
     
  9. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
In the end, what does it matter? None were 100 percent. It just goes to show that no matter which one you use, you still need to add some backup to it. ;)
     
  10. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
It matters to the companies and to the people who buy their products. If the samples in a test, any test, are not all verifiable malware, then what can you say about the results of that test?

If anti-virus company (A) adds any old tat and calls it a definition, but anti-virus company (B) only adds verified malware, then any test that includes a load of tat samples is going to skew the results in favour of anti-virus company (A) (not purposely).

Which brings me back to my question of who verifies all these samples and how they do it!
     
  11. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    @Joliet Jake: I do not get 2000+ samples per hour.
     
  12. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
Yes and no. Look at Computer Associates: it always ranks near the bottom, but it focuses on the corporate environment and is one of the largest around. I don't think they sit by twiddling their thumbs waiting for tests to be published. Same for Panda: in the middle, but still focused on selling their products to companies. I think sales to lone individuals are a small part of the profits. But I can't disagree with you either; my statement was more in line with us, the individuals.

Don't focus on the percentage achieved, but on the large number missed. And that is what is scary, even for GData and the others. It just says: don't think that by buying number one, your ass is covered.
     
  13. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
Automated tools, but I'm not sure how they do it, and I doubt you will find that info.
     
  14. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
Yeah, I know, it was Marx. Sorry, I quoted your post and should have made that clear in mine.

My post was a more general one asking how all this is done, rather than being directed at any individual. ;)
     
  15. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
Who writes/configures the automated tools? Can an automated tool tell you accurately whether a new sample is malware or not?

For me, until I know how these things are done, how can I trust the conclusions of any test?
     
  16. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
I am sure there isn't a textbook written on the proper procedures for testing. Therefore you are left with that old guiding principle of "trust". You and only you can decide whom you trust, whatever their measurements and procedures for testing may be. I trust IBK; others may disagree. It is a crap shoot, and your question is very valid, but unfortunately one that will never be answered to the satisfaction of all. ;)
     
  17. Joliet Jake

    Joliet Jake Registered Member

    Joined:
    Mar 1, 2005
    Posts:
    911
    Location:
    Scotland
    Damn!

Nah, I just wonder how so many samples can be verified accurately. Surely to goodness someone on here knows how it's done. :D

My original post was not about IBK (although I did quote his post, it was the huge number of samples per month that got me thinking), but about how that many samples could possibly be checked.
     
  18. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
Simple: you can't.

I've always said they should be a guide only.

I do like IBK's tests though, mainly for his presentation/reports, and I look forward to the next one :thumb:

But relying on these as a detection test? I've never done that, as I really don't like these massive tests, but hey, that's just my personal opinion.

I do find them interesting, but quite useless for the people who purchase AV licences, because....

fact is, people look at these results and decide their fate when trying/buying an antivirus; we know everyone does this, right?

And given the way these massive tests are performed, I find this extremely misleading.

Edit: none of this is a personal jibe at IBK or Marx, just what I think about these big tests ;)
     
  19. Diver

    Diver Registered Member

    Joined:
    Feb 6, 2005
    Posts:
    1,444
    Location:
    Deep Underwater
I am used to it; I just thought I would say something about it. There are a lot of fanboys for the AVs. It's amazing, as it is very hard to really know which offers better protection. No wonder the mods had to ban A vs. B threads.
     
  20. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    Some of the tools are written by people working at AV companies and other tools are written/coded by the testers themselves.

Yes and no. They do identify and remove lots and lots of corrupted files, but thousands still remain after the tools are done with their checking. I know that testers at least make some effort to check samples themselves, but obviously that is a tedious and difficult job, so not every sample can be checked. In any case, I know that, at least for AV-Comparatives, the number of corrupted files remaining in the final test set is not very significant with regard to the detection rates of the various AV products (and there will be a paper next month on AV-Comparatives which will show just how significant the differences would be).

I would not place too much trust in what Dr.Web says. After all, one needs to realize that they need to keep their customers trusting their product...... But who knows. To be honest, until AV-Comparatives' paper is released in February, one cannot take Dr.Web's comments about the AV-Comparatives test set as truth.

Granted, some AVs do add crap; but from my submission experience I know very well that many, many vendors other than Dr.Web also do not add "crap". Just because Dr.Web is suddenly scoring somewhat low does not mean the other AVs are adding crap and that this crap is what makes them score well. Like I said, any inferences or conclusions drawn from Dr.Web's comments on test sets can only be verified after the AV-C paper has been released, which will show the impact of the corrupted files on the statistics.

AV-Test is a different story; I do not know much about how they handle their test set. :doubt:

    Ahh, famous Daniloff quote :D

    But we never did know who those five were who developed their own technology (apart from Dr.Web of course)....:)
     
  21. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
Well, maybe we don't know what better protection is, but some AVs, according to tests, make even HIPS pale. :D

Anyway, it's always interesting watching so many people getting interested in these tests. I guess that is why such tests occur.

P.S.: I believe in Jotti's. It makes tests pale. :D
     
  22. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
The corrected result for e.g. Dr.Web in the August test is 89.94%. The rankings and awards remained unchanged for all products.
I will try to compose and release the document during the next few weeks.
     
  23. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
It means (from a vendor's view) "Royal Major Pain In The Butt" Trojans.
     
  24. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
Folks, keep in mind that those AV tests (no matter from whom!) only give you a brief OVERVIEW. An OVERVIEW is supposed to cover *ALL DIFFERENT USER TYPES*. To get your own "perfect" AV test, the approach would be as follows:

You write down on a piece of paper:

1. Your operating system (so the tester knows what platform he has to test).

2. All your installed software (so the tester knows what types of additional application data you can use). Example: if you DON'T have Excel/Word or something similar installed, the chance is much lower that you get infected by an XLS (Excel) file that contains a virus, since you have no application to "activate" it. You'd have the virus on your system, but it would be "harmless" there. Not nice, especially if you intend to share that file, but still, *YOU* won't be affected by it.

3. Most malware comes via internet surfing / emails / exploits.
That said, the tester needs to know *YOUR* internet behaviour. For someone who does daily P2P downloads, the "optimal" test looks COMPLETELY different from the test the tester would perform for a normal office workstation. Another example: if you play World of Warcraft, you'd be pissed off to the maximum when you find out that your AV missed a WoW trojan, your gaming account got hacked, your character is naked, and nearly two years of raiding time in instances are "gone". That wouldn't be such a big drama on another machine where no WoW is running. THAT BOILS DOWN TO THE (ONLY VALID!) CONCLUSION: the IMPORTANT virus is *ALWAYS* the virus that affects your own system in a negative way.

AND NO ANTIVIRUS TESTER IN THE WHOLE WORLD CAN DETERMINE THAT IN AN OVERALL TEST FOR *YOUR* SYSTEM. Period. Those tests are made to cover as many users as possible, but they DON'T reflect the best AV program for *YOUR* personal use!

You do, of course, have a higher chance that a program which covers more viruses in overall detection also detects the specific virus you have to deal with. BUT (and that is no joke!) sometimes an AV program that scores medium results in such tests would be the better choice for *YOUR* personal requirements.

If you play MMOGs (online games such as WoW etc.), your best choices would be Microsoft / F-Prot and two other AV programs. As for Microsoft and F-Prot, I can confirm that: I know MS guys from the lab who play WoW (they play Alliance :D), and I know who from our virus lab plays it ;) And yes, I myself play a Blood Elf Rogue in Gladiator gear :D So it's most likely that we encounter such malware (and pay attention to it!) as soon as we hear/see something. We do not put the highest priority on it, but it gets included, since several employees have a "personal" interest in it :D RISING Antivirus from China, on the other hand, has terribly good detection of LINEAGE trojans. No wonder: Lineage is more popular than WoW in China, and most of its trojans are written in China/Korea.

FIND A WAY TO SUM UP YOUR BEHAVIOUR/SYSTEM and find the suitable AV for it! That's YOUR job, not the antivirus tester's!
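The advice above, match the AV to your own usage profile rather than to an overall score, can be read as a simple weighted comparison. A minimal sketch: every product name, threat category, detection rate, and weight below is made up purely for illustration and is not real test data for any vendor:

```python
# Hypothetical per-category detection rates (fractions), invented for
# illustration only -- not taken from any real AV test.
av_detection = {
    "AV_A": {"email_worms": 0.99, "game_trojans": 0.80, "p2p_malware": 0.95},
    "AV_B": {"email_worms": 0.90, "game_trojans": 0.98, "p2p_malware": 0.85},
}

# How much each threat category matters to *this* user (weights sum to 1).
# An MMO player might weight game trojans heavily, per the post above.
mmo_player_profile = {"email_worms": 0.2, "game_trojans": 0.6, "p2p_malware": 0.2}

def profile_score(rates, profile):
    """Weighted detection score for one AV under one usage profile."""
    return sum(rates[cat] * weight for cat, weight in profile.items())

# The product with the best *overall* average is not necessarily the one
# with the best score for this particular profile.
best = max(av_detection, key=lambda av: profile_score(av_detection[av], mmo_player_profile))
```

With these invented numbers, the product with the weaker overall average wins for the MMO profile, which is exactly the "medium scorer can be the better personal choice" point made above.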
     
  25. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)