Dr Web and AV Comparatives

Discussion in 'other anti-virus software' started by jrmhng, Feb 3, 2008.

Thread Status:
Not open for further replies.
  1. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    @trjam: :thumb:
    If any company prefers not to take part in some tests, it is their decision and it has to be respected.
     
  2. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Thanks. I know my new and permanent one :rolleyes: isn't going anywhere. ;)


    And Chris, look at the bright side of things. At least you won't have to wait every 3 months to see how yours did. :D
     
  3. Hangetsu

    Hangetsu Registered Member

    Joined:
    Jan 9, 2006
    Posts:
    259
    I completely agree. I was only saying that to some it may APPEAR to be something negative, that's all. That, and I see value in independent testing when making security product decisions. I tried in my posts to make clear I DON'T think Dr.Web is a bad AV or is trying to do something shady, only that it could come off that way. If I gave the impression that it WAS doing something wrong, I apologize.

    Personally, I'm enjoying the topic and debate though.
     
  4. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    lol, me too.

    The arguments:

    av-c: none, their tests are 'near perfect'

    drweb: the tests are flawed, contain junk files, files are not checked manually, and real-world threats that are a concern to customers are very limited in the tests.

    --------------

    I shall watch it develop; I think I've given enough Dr.Web posts on here. ;)
     
  5. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    I use IRC ALL THE TIME,

    mIRC

    /server irc.allmp3s.net
    /j #spidersweb
    /j #allmp3s

    and I feel completely safe and sure that Dr.Web will keep me protected if anything should get through my client. My mIRC is set up not to receive files unless I authorize it, so I can't see how I could get a problem.

    Regardless of whether Dr.Web participates in such tests, I've complete faith that it will keep me secure. :thumb:

    In my experience, mIRC is safer than MSN or Yahoo Messenger, which in itself is rather shocking, because millions use them, many of them children, etc.
     
    Last edited: Feb 8, 2008
  6. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    It was my understanding that AVs must pay AV-C in order to be included in their tests. Am I wrong? If not, then DRW's withdrawal from AV-C might have something to do with simple economics. There might be a better, less expensive advertising milieu for DRW -- especially since an appreciable segment of their customer base probably resides in Russia. (AFAIK, DRW is a very small company, by the way.)

    Another thing that I *think* might have bearing on this issue: it is my understanding that all participants in AV-C's tests receive info (specimens?) of the malware they did not detect. If so, it seems rather strange that DRW has evidently not responded to or acted upon the full body of these specimens. I wonder why? Too small a staff at DRW? Or are they *unimpressed* with many or most of the specimens? And if they are intentionally NOT doing very much of anything therewith, I wonder WHY?

    BOTTOM LINE- We simply do not have the facts & background that would be needed in order to reach informed conclusions about DRW's decision. Such being the case, those who allege that DRW's withdrawal from testing implies "something to hide" or such are functioning as jurists who lack evidence. My attitude toward such jurists is very similar to the attitude of a tree toward wandering pooches. ;) :cool: :D
     
  7. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Not really; as always, my opinions of a product are formed by my own testing, or by the testing of people I can personally verify. Ikarus kicks even Avira's ass when it comes to detection, albeit at the cost of higher FPs. Its turnaround time for new malware could easily give Kaspersky a run for its money as well.
     
  8. Hangetsu

    Hangetsu Registered Member

    Joined:
    Jan 9, 2006
    Posts:
    259
    Again, it's not a question of something to hide, but rather of electing to have one's software in some independent tests and not in others -- oddly enough, the one(s) that are unfavorable toward the software.

    Does that mean they necessarily have "something to hide"? Of course not. I've never said they were doing anything of the sort. But if I'm shopping for a piece of software, why would I take the "question mark" when there are competitors that would not have it?

    We are talking about security software here, and also personal decisions about what one needs for one's machines. Independent tests are but one factor of that decision, but it's a valid factor.

    I personally don't care about their reasons (or any other vendor's reasons) for not partaking in a test. For ME, individually, as someone who uses AV-Comparatives in his decision-making, it's a negative. That does NOT, in any way, make DRW a bad product. It just means that I wouldn't buy it.

    Having said all that, I purchased OneCare in a moment of stupidity... Perhaps my criteria set needs an overhaul anyway... :D
     
  9. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Dr.Web said it has nothing to do with the introduced fee or the size of their company. I also told them that I would include them for a very low fee, but they declined. No vendor withdrew from the test due to the fee.
     
  10. jrmhng

    jrmhng Registered Member

    Joined:
    Nov 4, 2007
    Posts:
    1,268
    Location:
    Australia
    Are you in a position to reveal what order of magnitude you are charging AV vendors?
     
  11. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Geez folks, I thought this thread was settled and finished yesterday. Closing. Oops, I can't do that. :rolleyes:
     
  12. Hangetsu

    Hangetsu Registered Member

    Joined:
    Jan 9, 2006
    Posts:
    259
    Personally, I'd rather discuss it more as the pros vs. cons of independent testing when making a decision on security products. This could apply to ANY vendor, not just Dr.Web.

    I'm starting to get a guilt trip because I'm not not NOT trying to bash Dr.Web, dangit! I'm just debating the reasons I use independent testing in my decision-making - that part of the thread is pretty good, and I like hearing others' positions (as I know very well I'm not always right).
     
  13. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    Soooo... start a new thread on that topic if you wish.

    @IBK- thanks for the info. You deal justly & fairly, as always. Live long & prosper. Shalom
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    As bellgamin noted, this might warrant a separate thread. That said, I look at the tests, but realize that they are an imperfect quantitation of what's important to me.

    Let's step back a bit. The perfect signature-based product is one that possesses a single signature - the signature for the next piece of malware about to land on your PC. OK, that's not an achievable product without time travel, so what is achievable?

    If a product detected all known samples, it's probably more comprehensive than one that detects, for example, 10% of all known samples. However, if that 10% sample set contains every piece of malware that you will ever encounter, the performance of these two products in your hands will be the same. Therein lies the user's dilemma in basing a decision solely on the results provided by challenge tests. Obvious additional strawmen could be put forth, including completely valid ones in which the in-field performance of the "more comprehensive" solution is lower than that of the product which provides "lesser" coverage (i.e. it has more signatures, many of which are irrelevant, but misses key important ones). Unfortunately, I don't know where reality sits in this great expanse, which is one reason I tend to emphasize looking, at most, at the broad certification levels and not the often negligible differences in % detected that serve as the fodder for most of the discussion here. I realize that even this does not completely address the points I've just made, since performance across certification levels will differ according to the user-specific challenge as well.
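
    To make that dilemma concrete, here is a minimal sketch (Python, with entirely invented numbers - an illustration, not data from any real test) of two hypothetical scanners: one covering 100% of a known-sample corpus, one covering 10%, yet performing identically against the handful of samples a particular user actually encounters:

        import random

        random.seed(42)

        # Pretend the "known sample" universe holds 10,000 distinct pieces of malware.
        universe = set(range(10_000))

        # Product A detects every known sample; product B detects only 10% of them.
        detects_a = universe
        detects_b = set(random.sample(sorted(universe), 1_000))

        # Suppose this user only ever encounters 20 samples, and all of them
        # happen to fall inside product B's smaller signature set.
        encountered = set(random.sample(sorted(detects_b), 20))

        hits_a = len(encountered & detects_a)  # 20 of 20
        hits_b = len(encountered & detects_b)  # also 20 of 20

        print(f"corpus coverage: A = 100.0%, B = {100 * len(detects_b) / len(universe):.1f}%")
        print(f"in-field hits:   A = {hits_a}/20, B = {hits_b}/20")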

    One trending behavior in the independent tests that has gained traction of late is using "current" malware collections in which the distinct samples number in the millions. On the face of it, more is often better, but there is no way that junk samples are not increasingly making their way into these sample sets when the numbers reach such heights over relatively short collection times. How much? Who knows. Does it affect the results in a minor or major way? Again, who knows.

    Rough consistency across the various independent tests implies that the effects, for the most part, are minor. However, that's not always the case. For example, some time ago I performed a direct comparison between www.av-comparatives.org and www.AV-Test.org results in which the sizes of both testbeds were about 500,000 samples. The average of the absolute values of the differences in % detected was 1.4%, which indicates fairly good agreement. However, the differences for two AVs (F-Prot and NOD32) were 8.0% and 8.4% respectively (the 1.4% value quoted previously excluded these two results only). The difference for Dr Web was also somewhat high at 4.4%. So while we had a number of results in agreement, we also had a significant disconnect in some instances. Naturally, this begs the question: how big does the difference have to be to be genuinely different? Is it 4%? 8%? 20%? Once again, I'll emphasize that these numbers are a lot larger than the differences generally obsessed over by discussants in heated exchanges following publication of any given test.
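
    For anyone who wants to reproduce the arithmetic, the sketch below computes the same statistic from placeholder data. Only the three quoted differences (8.0%, 8.4%, and 4.4%) come from my comparison; the other vendors and all the absolute rates are invented, chosen merely so the trimmed mean lands on the 1.4% figure:

        # (vendor, AV-Comparatives %, AV-Test %) -- hypothetical except where noted
        results = [
            ("AV-1",   97.3, 97.0),
            ("AV-2",   96.0, 95.5),
            ("AV-3",   98.1, 97.3),
            ("AV-4",   94.6, 93.4),
            ("AV-5",   95.2, 94.0),
            ("F-Prot", 93.0, 85.0),  # 8.0% apart (quoted above)
            ("NOD32",  94.4, 86.0),  # 8.4% apart (quoted above)
            ("Dr Web", 90.4, 86.0),  # 4.4% apart (quoted above)
        ]

        diffs = {name: abs(a - b) for name, a, b in results}
        outliers = ("F-Prot", "NOD32")
        trimmed = [d for name, d in diffs.items() if name not in outliers]

        # Mean absolute difference, with and without the two outlier products.
        print(f"mean |diff|, all products:      {sum(diffs.values()) / len(diffs):.1f}%")
        print(f"mean |diff|, outliers excluded: {sum(trimmed) / len(trimmed):.1f}%")  # 1.4%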

    For this reason, I increasingly view these results as reasonably objective but crude guides to performance, for which the translation to field-use performance is not quantitatively apparent. In the absence of other objective and wide-ranging figures of merit, they are the primary snapshots available, but they do need to be assessed cautiously if one is attempting to acquire a global sense of product performance.

    Blue
     
  15. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    If you allow "customized" prices then this develops into a circus. The bigger companies will ask themselves, "Is it our fault that we have to pay more just because we have a few more employees?"

    IMO prices should be the same for all, no matter the company size.
     
  16. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Sad, but IC is right. A test is a test is a test, and no one should be penalized for their size. I guess next the fee will be based on how well they detect. :thumbd:
     
  17. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    By company size I did not mean number of employees.
    E.g. ClamWin does not have the same resources as e.g. Trend Micro, but ClamWin should also have the chance to take part (if they would score well).
     
  18. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    I respectfully disagree. You are investing the same time and effort into the test regardless of the vendor; therefore your pricing structure should be the same. When you start offering a menu of prices, for whatever reason, then yes, I can see where a vendor would question the integrity of the test based on the fee. I am not saying this is happening, nor questioning your integrity. All I am saying is that the policy you have set forth is one that can make things look skewed, and really there isn't a need for it.

    If Clam wants in, tell them to improve their product, which will increase their sales, which will allow them to pay the same as everyone else. Should I pay more for a Big Mac at McDonald's because I make more money than Joe? No.
     
  19. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Did I forget to mention that the fee is not for the tests but for all the other ("internal") services provided (incl. use of the logo etc.)? Whoever pays less may be tested anyway, but will e.g. miss some of the other services (like use of the logo).
     
  20. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    I understand, but to me, once anything, and I mean anything, is different from A to Z for any vendor, that allows anyone to discredit your findings, right or wrong.

    Fees based on a fictitious scenario:
    Clam pays A
    99 percent pay B
    One company is real generous to you and pays C

    See where I am heading?
     
  21. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    As results / missed samples are cross-checked, results cannot be faked or invented. If someone wants to discredit someone, he could also say that vendor A pays more than vendor B because it scores higher, even if all vendors pay the same.
    No AV vendor would want to take part in a test which can be influenced by money. That I am genuine and independent should be known, I think. If money had an influence, I would have allowed e.g. PC Tools and Sunbelt to take part this year, but I declined and told them that it is probably better to wait until their products score higher.
     
  22. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    It's not the perception of the validity of the tests based on your testing methodology; it is the perceived equality of your fee structure that someone could question and, from there, spiral downward.

    Hey man, it is your company and you are entitled to do what you want. I trust the tests, but I have to say that in your line of work I would want everything totally equal, from A to Z. ;)
     
  23. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Well, it IS equal in regard to the tests. The possibility of discounted rates is there for special cases, also e.g. to see whether a vendor does not want to participate due to the amount of the fee or due to other reasons.
     
  24. bontchev

    bontchev AV Expert

    Joined:
    Nov 13, 2005
    Posts:
    38
    Maybe I am mistaken, but it is my understanding that the company I work for (FRISK Software International) also withdrew from these tests. The reason was indeed "simple economics" - it was decided that the quality of the tests simply did not match the (substantially increased) fee.

    I also agree with IC above - all companies should be charged the same fee, no favorites. Doing otherwise will undermine the credibility of the tester (independently of any credibility lost to low-quality tests - i.e., it would happen no matter how good, or how bad, the tests are). It will create the impression that the rich companies are "buying" better-looking test results.

    Regarding the argument that this will put the small (and poor) companies at a disadvantage, my response is that the price of the tests must not be unrealistic (as, I believe, the price of the tests in question currently is) and it must be affordable by everyone.

    Case in point - Virus Bulletin doesn't charge anything for its very professional tests. Still, they have other revenues, so a small fee for a stand-alone testing company with no other income would be reasonable. Not the kind of fee Andreas Clementi has started asking, though - especially not given the quality of testing he is offering, sorry. In that respect (the quality of these particular tests) I am forced to agree with Dr Web's stated position to the letter.

    Regards,
    Vesselin
     
  25. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    In the case of F-Prot it was not "simple economics" (at least, that is not what Bryndis told me).
    VB likewise charges not for the tests but for e.g. the magazine, based on company revenue.
    The amount of the fee suggested by AV-Comparatives was set based on what various vendors told me would be adequate to ask, also compared to what some others charge. If I can choose, OK, same fee for all vendors; I just thought this would be fairer.
    Quality cannot improve if there is no money to pay for hardware and employees adequately. Running a test using just samples from the WildList is something that one person can do in one day, and there is not much to do wrong with it; calling that professional is a bit of an overstatement. Even the VB tests include wrong results every few months due to bad samples, and aVTC, which focused on quality testing, was also by far not completely free of errors (and you [Dr. Bontchev] should know that better than anyone else).
     
    Last edited: Feb 11, 2008