Dr Web and AV Comparatives

Discussion in 'other anti-virus software' started by jrmhng, Feb 3, 2008.

Thread Status:
Not open for further replies.
  1. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
nope, only saying that you don't see people getting infected using DrWeb.

sure, DrWeb, like many AVs, misses threats.

and when you hear so many stories of CureIt fixing people's machines when they use so-called better AVs, it makes you wonder.

i.e., let's say AV-Test shows an AV has 99% detection, using over a million threats from the last 6 months alone. you would then expect this AV to be super, not just good, but super. if it can detect over 99% of a million threats circulating in the last 6 months alone, the user is safe, true?

untrue. it is these AVs that are failing to protect against the real threats. almost daily or weekly you see infected machines, constantly from the high-score AVs, which CureIt will fix for them, for free.

i fail to see how the chances can be that high. if an AV detects 99% of a million threats from the last 6 months and users still get infected, sure, it can happen, but the chances should be extremely low, and user experiences do NOT back up these test results.
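The arithmetic behind this argument can be sketched in a few lines. The 99% rate comes from the post above; the 1,000,000 sample count and the 50-encounter figure are made-up illustrative numbers, not data from any real test, and the sketch assumes each encounter is an independent coin flip, which real infections are not.

```python
# Hypothetical figures: a scanner detecting 99% of a 1,000,000-sample
# test set still leaves 1% of those samples undetected.
total_samples = 1_000_000
detection_rate = 0.99
missed = total_samples * (1 - detection_rate)
print(f"undetected samples: {missed:.0f}")  # undetected samples: 10000

# Per-user odds are a different quantity: if one user runs into, say,
# 50 threats, each detected independently with probability 0.99, the
# chance that at least one slips past is already substantial.
encounters = 50
p_at_least_one_miss = 1 - detection_rate ** encounters
print(f"chance of at least one miss: {p_at_least_one_miss:.1%}")  # 39.5%
```

Under those assumptions, a high per-sample detection rate and regularly infected users are not actually contradictory: the per-sample rate says little about a user's cumulative odds over many encounters.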

so really, how can these massive tests judge an AV?
i have hated these big tests all along, Elite, so i ain't jumping on any bandwagon here or anything.

who are the testers to say if an AV is bad or not, if they don't actually check the malware themselves?

who are the testers to play the numbers game and put one AV against another?

it is NOT their business, and i believe the AV companies know more about protection for their own users than those playing the numbers game.
     
  2. flyrfan111

    flyrfan111 Registered Member

    Joined:
    Jun 1, 2004
    Posts:
    1,229
Looks like you said it there as well. There don't seem to be too many ways to take that. Sounds like Dr.Web catches everything, to me.
     
  3. dawgg

    dawgg Registered Member

    Joined:
    Jun 18, 2006
    Posts:
    818
At the end of the day, the fewer users an antivirus has, the fewer users report problems and the fewer users get infected. People should use something they're comfortable and content with using.

    Sounds like this thread is becoming a DrWeb-bashing thread... consider closing?
     
  4. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
yes, but out of context, though.

there are viruses that DrWeb misses, but for the samples that DrWeb does not detect, who are the testers to say they are actual threats?

doesn't DrWeb still protect the Russian Ministry of Defence?
    i really doubt they would put great faith in DrWeb on a big contract if they believed its protection to be crap, or at the lower end.

even IBK said anti-malware.ru is a well-respected tester, yet their results differ so much in comparison; DrWeb does well.... and the argument that this is because they are Russian is just stupid.

over the last 12 months, DrWeb is apparently sitting in 3rd place for ZERO-DAY threats according to Shadowserver.

    example:

av-comparatives say Nod32 is the BEST antivirus, for both 2006 and 2007 i believe.

    yet,

    shadowserver (over the last year):

    Nod32: 6th - 84.27%
    Drweb: 3rd - 96.55%

    Anti-malware.ru

    Heuristics:

    Nod32: 59%
    Drweb: 57%

not really the massive difference that av-comparatives suggests.

    Rootkits:

    Nod32: 1/8
    Drweb: 5/8

    Removal of infections:

    Nod32: 18%
    Drweb: 82%

    -----------
but of course, you see Nod32 as the best, because av-comparatives says so; YOU BELIEVE IT SO.

people spend a high amount on a licence, and believe themselves to have THE BEST.

regardless of what you all may think, this is what happens.... these are facts. it's playing the numbers game, something which DrWeb has finally pulled away from.

i know IBK probably thinks i'm flaming him, but i have big problems with these big tests and how they are set up, not with IBK himself.

i think that until these big testers select a smaller number of threats, manually check them for active viral code, and then run the tests, including prevention and removal, these tests will ALWAYS be flawed.

to quote DrWeb: if even a single sample counted as a threat is not a threat at all, the test is flawed.

    simple as.

    ty dawgg, it does seem to be going that way.

    but i can certainly hold my own right here ;)
     
  5. EliteKiller

    EliteKiller Registered Member

    Joined:
    Jan 18, 2007
    Posts:
    1,138
    Location:
    TX
    Exactly, and I quoted that statement in my original response. ;) I realize what CSJ is trying to convey, but the fact of the matter is that he's contradicting himself. According to the samples scanned by Virus Total it appears that Dr.Web has a tough time with early infections. http://winnow.oitc.com/AntiVirusPerformance.html

    CSJ, are you compensated by Dr.Web in any way for your posts on message forums?
     
  6. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
are you serious, posting such results, Elite?

    and of course not, i receive no leg-ups from drweb.
     
    Last edited: Feb 3, 2008
  7. flyrfan111

    flyrfan111 Registered Member

    Joined:
    Jun 1, 2004
    Posts:
    1,229
NO, this isn't turning into a bashing thread, at least not with me. I was just asking you to clarify what you said, which you then said you didn't say.

I never said NOD was the best, quite the contrary; if you read some of my posts regarding the latest version and its problems, my opinion of it and Eset should be readily apparent.

Conducting tests on such a grand scale has problems inherent in the sheer number of malware samples in the collection, as you point out. On the other hand, as has been said, Dr.Web is one of the smaller AVs, and by simple statistics it will have fewer people infected while using it, mostly because fewer people are using it.

As for the RMD using it, well, the US DHS uses McAfee and I would not say that is the best available here in the States; governments tend not to go with the best, they go with what they can afford to overpay for.
     
  8. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Folks, let's step back a bit.

Speaking generically, the first thing to examine in making comparisons between different evaluation tests is whether the rank ordering (blurred to some vague degree reflective of the estimated noise in the test) is internally consistent between tests. If the rough rank ordering is preserved, this suggests that the tests are quantifying similar traits. If the noise-adjusted rank ordering comparison signals inconsistency, this merely suggests that the tests are quantifying distinctly different traits.

A trivial example of this is trying to measure how "big" something is: you could measure an object's height (you'll need to define precisely what this is), volume, weight, surface area, and so on. Each of these traits provides an indication of "bigness" in one form or another; however, they probe decidedly different aspects of "bigness".

There are clearly inconsistencies between the views provided by the various sources mentioned above. However, as with the physical example just given, the existence of inconsistencies doesn't mean that one of the results is incorrect.
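The rank-ordering check described above can be made concrete. This is a minimal sketch using MADE-UP scores for six hypothetical products in two tests; the scores and both helper functions are my own illustration, not figures from any source in this thread. Spearman's rank correlation is one standard way to quantify whether two tests order the same products consistently.

```python
# Spearman rank correlation: 1.0 means the two tests produce identical
# orderings (consistent traits), -1.0 means fully reversed orderings.

def ranks(scores):
    """Rank positions, highest score gets rank 1 (no tie handling)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    positions = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        positions[i] = rank
    return positions

def spearman(a, b):
    """Spearman rank correlation for two equal-length score lists."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d_squared = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d_squared / (n * (n * n - 1))

test_a = [99.2, 97.8, 96.5, 94.1, 90.3, 84.3]  # hypothetical test 1
test_b = [96.6, 95.0, 93.2, 91.8, 88.0, 84.0]  # hypothetical test 2
print(spearman(test_a, test_b))  # 1.0: same ordering, consistent tests
```

Note that the raw percentages differ considerably between the two lists, yet the correlation is perfect: tests can disagree on absolute numbers while still measuring a similar trait, which is exactly the distinction drawn above.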

As for Dr.Web's participation in AV-Comparatives, that is a business call for them. It removes one metric a potential user might employ for product comparisons, but if they (Dr.Web, that is) have concerns that this test does not adequately portray the real-world performance of their offering, withdrawing is actually the most appropriate measure for them to take, since participation implicitly signals acceptance of all aspects of the testing protocols and results.

As for some of the sources mentioned above, when I examine the various numerical results out there, I must admit a fair measure of puzzlement as to specifically what each "evaluation" probes. Some clear items emerge when one examines things closely (e.g. apparent domination of the sampled population by a handful of variants despite the large numbers involved), but how this plays out in reality is somewhat less clear.

    Blue
     
  9. n8chavez

    n8chavez Registered Member

    Joined:
    Jul 19, 2003
    Posts:
    3,355
    Location:
    Location Unknown
I suggest you don't post results that are a year old, as these are today's. It'll just make you seem desperate for anything to back up your baseless claims and make it seem like you are attacking Chris.
     
  10. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
i can't say for sure, but i believe this is the reason why, Blue.

    a good observation.
     
  11. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
Maybe so, but that does not mean that the missed samples in AV-comparatives are corrupted files or "non-actual threats". AV-comparatives' analysis of corrupted files, and of detection rates after removing such files from its test set, revealed a VERY minor difference in detection rates for Dr.Web. That said, the fact that NONE of the other participating vendors have criticized AV-comparatives the way Dr.Web has just makes the situation worse and worse for them, and it is harder and harder for me to believe what Dr.Web says.

    Whatever the disagreements between Dr.Web and AV-comparatives may be; AV-comparatives has presented proof that its results are credible with the latest paper. Now let us see Dr.Web produce some proof about how exactly AV-comparatives' test set is flawed.

    Dr.Web is a decent AV; but they seem to be facing a little internal crisis with regards to resource management. I hope they get back up on their feet FAST, or else it won't be good for them.

    Regarding infection rates, we have to consider two things. The first is that Dr.Web holds a pretty small market share. The second, more important thing is that safe surfing will keep anyone safe. Not to mention that a large majority of Dr.Web's users are from Russia, and Dr.Web happens to be *very* good for protection from Russian malware.

    That may be true; but you should also note that no AV-tester would exist right now if the industry didn't need them.

    No tester says which AV is good and bad. The results are posted and the USER decides whether it is good or bad. Kinda in a way where an "A" grade would be considered a good grade in one school but an average grade in another (where an "A+" grade is good) :)

    Again; testers would not exist if they were not required. The importance of testers:

    1) Provides vendors with rough ballpark figures for their product compared with others (given that the testing conditions are agreeable)
    2) Provides marketing opportunities
    3) Provides samples for analysis = improved protection for users
    4) All three of the above help marketing to improve sales and overall company growth.

It also happens that Kaspersky, the only other worthwhile Russian AV player, is somewhat more costly than Dr.Web. Maybe that was a consideration in choosing Dr.Web, along with the fact that Dr.Web *is* good at detecting Russian malware.

You must realize that AV-comparatives awards are only indicative, not the last word. In my opinion it looks much more like a PR game from Dr.Web than a numbers game from AV testers.

    At the end of the day, Dr.Web still is a decent AV and the point is that if you like it, you are welcome to keep using it. But one must not always believe everything a vendor says; because in the end each company has to sell products and keep users happy. :)
     
  12. Bob D

    Bob D Registered Member

    Joined:
    Apr 18, 2005
    Posts:
    1,234
    Location:
    Mass., USA
    AV tests are all meaningless.
    Their results are useless/inaccurate.
    Unless of course YOUR favorite AV is ranked on top.
    Then, mysteriously, the test becomes valid :)
    (Not to mention you get bragging rights)
     
  13. Peter2150

    Peter2150 Global Moderator

    Joined:
    Sep 20, 2003
    Posts:
    20,590
I think this really hits the nail on the head. Take the top-rated AV, whichever that may be, put it in the hands of a totally foolish surfer/computer user, and the odds are high he will find a way to get infected. On the other hand, take the bottom-ranked AV, or possibly even no AV, in the hands of a safe surfer/wise computer user, and the odds are he won't get infected.

    We tend to worry so much about the safety of the product and rankings, when it is the user that really is the weak link.

    Pete
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    A couple of off topic posts removed.

    A suggestion to all - be civil and on topic.

    Blue
     
  15. Blackcat

    Blackcat Registered Member

    Joined:
    Nov 22, 2002
    Posts:
    4,024
    Location:
    Christchurch, UK
    :thumb::thumb:
     
  16. EliteKiller

    EliteKiller Registered Member

    Joined:
    Jan 18, 2007
    Posts:
    1,138
    Location:
    TX
    I suggest that you click on Underlying tabular data ;)
    Statistics valid as of: Sun Jan 20 10:13:07 2008 EST

    It's also important that you understand how the chart differs from the typical "comparative". Antivirus Performance Analysis Method
     
  17. AndreyKa

    AndreyKa Registered Member

    Joined:
    Feb 25, 2005
    Posts:
    93
    Location:
    Russia
    Webwasher: 3765 + 4656 = 8421
    AntiVir: 4306 + 6384 = 10690
    Are they even able to use a calculator?
     
  18. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    AndreyKa,

    They can use a calculator (the numbers you added were detected + total, i.e. 4306/6384 = 4306 detected/6384 total or 67.4% detected). That math is fine.
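The reading of the table described above (each row is detected/total, not two figures to be summed) can be checked directly against the quoted numbers:

```python
# The quoted AntiVir row: 4306 samples detected out of 6384 total.
detected, total = 4306, 6384
rate = detected / total
print(f"{rate:.1%}")  # 67.4%, matching the figure given above
```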

That's not the primary problem. Look at the total-samples number under each product; it reflects the size of the testbed seen by that product, and none of them are the same. Even allowing for the different points at which some products were added to or removed from the group examined, you would expect at least some products to have seen the same data set. However, if you do a simple list/sort of the totals, they are all unique (except for the ones listing 0, whatever that means). The site also seems to be updated rather infrequently. In any event, I'd tend to cast a skeptical glance at the numbers.

    Blue
     
  19. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    Scanners which are not available at Virustotal :)
     
  20. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    which begs the question of why they are listed at all in the summaries....

    Blue
     
  21. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
Because they're listed in other scan services (Jotti, etc.) and are awaiting inclusion in VirusTotal (?)
     
  22. Threedog

    Threedog Registered Member

    Joined:
    Mar 20, 2005
    Posts:
    1,125
    Location:
    Nova Scotia, Canada
I don't get too excited over any of these published tests. I have tried most of the ones of any note, done my own testing and evaluating, and in the end decided on Dr.Web. The one criterion I had was cleanup abilities, and Dr.Web really impressed me on that. But as usual... your own mileage may differ.
     
  23. NAMOR

    NAMOR Registered Member

    Joined:
    May 19, 2004
    Posts:
    1,530
    Location:
    St. Louis, MO
Kind of sucks that DrWeb is not being tested. For the most part I use these tests to gauge how an AV has been performing over a given period of time. I'd like to know if its detection has been consistent, improving, or dropping (when compared to its previous tests). I usually don't use these tests to see if AV A is better than AV B.
     
  24. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
The fact that they are not tested doesn't mean they don't want to be tested because their "score" there is not good. There can be completely different reasons than just that. For example, F-Prot will also not be participating in AV-Comparatives.
     
  25. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
with the difference that the reason F-Prot gave me makes sense and is understandable ;).
     