Dr Web and AV Comparatives

Discussion in 'other anti-virus software' started by jrmhng, Feb 3, 2008.

Thread Status:
Not open for further replies.
  1. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    In reply to your post and to some others regarding garbage.
    I don't know who "in the industry" considered AV-C tests to be reliable; I do know that Dr.Web people have always raised questions with AV-C. The discussion used to be sharp - but mostly about methodology.

    Last year Dr.Web finished at the bottom of the AV-C rankings, and that is why they started to look into the collection. No argument - Dr.Web did badly enough to be last in the tests, missing several tens of thousands of samples (which others caught at rates of up to 98-99%). But here is the point:

    when you miss some 50K files labeled "malware", you have a couple of hours of homework: pass them all to a robot, it adds them to your base, and you will look very good at AV-C. However, this is only possible when you are 100% sure that all those thousands of files ARE REAL malware. Otherwise, you have to explore each sample "manually", which makes your task a little bit harder. Still, this can be done if you are 100% sure that this manual work will make your product not just good-looking in the AV-C test, but stronger and safer for your users. Unfortunately, that was not the case.
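
    To make the trade-off concrete, here is a minimal sketch of the two paths in Python. The names (robot_add, manual_add, is_real_malware) are purely illustrative - not Dr.Web's actual tooling - and signatures are modeled as plain SHA-256 hashes:

    ```python
    # Hypothetical illustration of the two ways to handle "missed" files.
    # Names are illustrative only; not any vendor's real tooling.

    import hashlib
    from typing import Callable, List, Set

    def robot_add(missed_files: List[bytes], database: Set[str]) -> None:
        """The 'couple of hours of homework' path: blindly add a signature
        for every missed file. Safe only if every file really IS malware."""
        for blob in missed_files:
            database.add(hashlib.sha256(blob).hexdigest())

    def manual_add(missed_files: List[bytes], database: Set[str],
                   is_real_malware: Callable[[bytes], bool]) -> int:
        """The slow path: an analyst (modeled by is_real_malware) vets each
        sample first, so garbage never enters the base. Returns the number
        of rejected files."""
        rejected = 0
        for blob in missed_files:
            if is_real_malware(blob):
                database.add(hashlib.sha256(blob).hexdigest())
            else:
                rejected += 1
        return rejected
    ```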

    Dr.Web's virus lab started exploring the first portion of files submitted by AV-C (file infectors, I think) - item by item. And found 10% confirmed garbage. But the best players in the AV-C test scored more than 98% on file infectors! Which means that some garbage happened to be detected as malware! Which means that they just added those files automatically (from previous tests), though in their daily work they would never do that.
    Perhaps you can now understand the reason for not taking part in AV-C.

    The garbage issue was discussed in Bilbao, where AV companies gathered to address the testing quality problem. Doctor Web will certainly follow the discussion. I think it is simple: when you conduct a wine contest, you make sure that all liquids submitted to the contest are wines. This is the first and most basic condition. The same rule should apply to AV tests. Otherwise it is not fair - you claim that all files scored as "missed" are malware and should be detected, when in fact they are not.

    P.S. Doctor Web really is a small company, with 120 people working there.
     
  2. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    Well Larry, I'm afraid you'll have to live with our policy. No problem with you disagreeing, but it's not open for discussion either.

    regards,

    paul
     
  3. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    OK, good post. But in response, take Eset: wary of just blindly adding signatures, like Dr.Web is, yet it scores high. So its detections are based not on signatures but on its heuristics. Based on your comments, it would then seem that Eset's heuristics detect just about anything, including FPs, when in reality most of us know this not to be the case.
     
  4. larryb52

    larryb52 Registered Member

    Joined:
    Feb 16, 2006
    Posts:
    1,131

    You're making it personal; I wasn't. Your attitude is not appreciated when all I was doing was having an intelligent conversation. Don't bash me because I have an opinion. Sorry if I spoke out of turn, but that is how I was brought up - everyone can have an opinion. I respect yours.
     
  5. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    For the record: Mr. Bontchev is posting in a personal capacity and therefore not representing FRISK's official stance on testing issues, nor the reason FRISK bailed out this time. We'll invite someone from the FRISK Board of Directors to express their official view on this matter.

    regards,

    paul
     
  6. Bob D

    Bob D Registered Member

    Joined:
    Apr 18, 2005
    Posts:
    1,234
    Location:
    Mass., USA
    Thanks for the succinct explanation for the benefit of laypersons.
     
  7. bontchev

    bontchev AV Expert

    Joined:
    Nov 13, 2005
    Posts:
    38
    It's difficult to produce exact numbers because this has been going on for years and I didn't keep track of the exact numbers every time. Furthermore, since at FRISK I handle only macro- and mobile malware related issues, I got to see only that part of the "missed" stuff. From what I've heard from the guys at the Virus Lab who had to process the rest, it was of pretty much the same "quality" - but, you realize, this is just hearsay.

    Speaking of the stuff I've seen, I can reliably say that at least some 90% of it had no place in a virus test set to begin with. (I must emphasize - I'm not talking about his entire test set, which I haven't seen as a whole, but only about the stuff that was sent to us as "samples missed by your scanner".) Just because our scanner happened to detect the rest doesn't necessarily mean that they were proper virus samples - but I can't tell for sure without examining them first.

    Again, speaking of the stuff I've seen, there were all the typical mistakes made by incompetent virus collectors. Non-functional programs, corrupted executables, files with the wrong extension (e.g., SIS files with an EXE or APP extension; I forget which - this is as wrong as renaming a ZIP archive to EXE), partial disinfections, non-working stuff, etc., etc., etc.

    I still have somewhere in my backlog a bunch of files labeled "macro stuff from Clementi" which are of such low quality that they are not only not viruses - they don't even deserve to be classified as "intended". It would be a very rare event (like one in a thousand) for me to find a genuine new macro virus among the non-working crap. Sadly, this meant that I was forced to analyze it all, looking for these "pearls" - which, as I assume you realize, didn't amuse me at all.

    And this has been going on for years and years. I never saw any improvement. Sure, when we told him "this is crap and shouldn't be in a test set", he removed it without objection. But this is wrong, too. He removed it because he just took our word for it. We're a biased party in such things - the proper thing to do is to verify our claims by analyzing the files and confirming what we were saying. Anyway, the crap was removed - only to be replaced with more crap in the next test. As it is, he has just shifted the task of sorting out his test set onto the anti-virus companies. :-(

    A competent tester must always be able to explain why a particular sample is in his test set. "Scanner X detects it", "It was in Company Y's monthly virus collection" and "It was submitted through the FooBar on-line scanning service" are not good explanations. Most emphatically not. A competent tester must be able to analyze the sample and say "It is here because this code performs self-replication, that code contains the polymorphic engine, and my test criteria state that my test set consists of polymorphic viruses".

    Andreas, if I pick a random sample from your test set, are you able to analyze it and explain to me exactly which of its properties justified placing it there? Because if you cannot, you're not qualified to test anti-virus programs (or at least not qualified to construct test sets for such tests). And everything that I've seen of the samples coming from you tells me that you're not able, or not willing, or do not have the time to analyze them. We aren't talking about honest mistakes here. We're talking about the mindless piling up of unanalyzed crap.

    Regards,
    Vesselin
     
  8. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    No - I'm merely pointing out our policy is not open for discussion, whether anyone likes it or not.

    I fail to see how I've bashed you in any way - read back what I've posted.

    regards,

    paul
     
  9. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Severyanin, I am afraid your explanation makes no sense at all.

    First of all, during the two on-demand comparatives of 2007, your company's product scored 92% and 95% respectively on file infectors (termed "Windows viruses" by AV-C). Secondly, page 8 of AV-C's methodology document states that only missed samples are sent to the vendors.

    Of the 63,029 file infectors used in the August 07 on-demand comparative, DrWeb detected 60,326 - and, quoting you, 10% of the 2,703 missed samples were "garbage". By simple math one can easily work out that the "garbage" samples in actuality make up only 0.43% of the total test set for file infectors.

    Is this 0.43% the reason for all this hue and cry?

    Also, there's a serious mathematical error in your explanation: just because 10% of the samples that DrWeb missed are "garbage" does not necessarily mean that the products scoring 98-99% detected that "garbage" as malware.
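
    A back-of-the-envelope sketch of that arithmetic, assuming the AV-C figures quoted above:

    ```python
    # Check of the figures quoted in this post, assuming AV-C's
    # August 2007 file-infector numbers as given.
    total_samples = 63029                    # file infectors in the test set
    detected = 60326                         # detected by DrWeb
    missed = total_samples - detected        # 2703 missed samples
    garbage = round(0.10 * missed)           # ~270 files, per the 10% claim

    print(f"missed:  {missed}")
    print(f"garbage: {garbage} = {garbage / total_samples:.2%} of the set")

    # A 98-99% scorer misses 630-1260 samples of its own - a set that
    # need not overlap DrWeb's 2703 misses at all, so nothing here shows
    # that the top scorers detected the "garbage" files.
    ```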

    Just some food for thought. :shifty:
     
  10. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    One positive point of a small company: it takes only "seconds" to deal with such matters :D

    Here's the official statement:

    Vesselin does *NOT* speak officially on behalf of FRISK Software International. That would be either Fridrik's, Bjorg's, or my job (as you can see now).
    Fridrik has asked me to reply here and to clarify that.

    Vesselin is of course entitled to air his own thoughts, but please don't take them as directly connected with *our* company. Vesselin is old enough (and even holds 2 passports) to write things on his own, for PERSONAL reasons.

    Mike, FRISK Software International.
     
  11. bontchev

    bontchev AV Expert

    Joined:
    Nov 13, 2005
    Posts:
    38
    I wholeheartedly agree with the rest of your post; just a small correction regarding the above. It (the above, that is) is not necessarily true. Sometimes a scanner will detect non-viruses not because they were added mindlessly to its database but for many other reasons. For instance, they could be corrupted known variants that don't work - but the scanner detects the "known" part in them. In other cases they are intendeds - i.e., they contain replication code that happens not to work - but the fact that they attempt self-replication is caught by the scanner's heuristics.

    I've said it many times, but it bears repeating: whether a scanner detects some file or not provides absolutely no guarantee as to whether the file in question contains a virus. Yes, I know that it's counter-intuitive, and that most of the time scanners detect viruses and don't detect non-viruses - but THERE IS NO GUARANTEE. As a consequence, any tester who relies on scanners to tell him whether to put a sample in his test set is only demonstrating his gross incompetence. This is wrong on so many levels that I don't know where to begin. It puts non-viruses there. It omits real viruses. It biases the set towards the scanner with the highest level of detection (which might mean - the highest level of false positives). Yet this is precisely what many so-called testers do. :mad:

    The only proper way for the tester to proceed is to analyze every single sample, replicate it, and only then put it in the test set. (And replicate it for more than one generation - because some pieces of crap replicate only once.) If a tester does not have the technical competence to do that, he is not qualified to test anti-virus programs, period.
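
    A minimal sketch of that vetting loop, with hypothetical callables standing in for real lab tooling (disassembly, an isolated replication host) - these are assumptions, not any tester's actual API:

    ```python
    # Sketch: analyze each candidate, replicate it for more than one
    # generation, and only then admit it to the test set.

    from typing import Callable, List, Optional

    Sample = bytes  # a candidate file

    def vet(sample: Sample,
            analyze: Callable[[Sample], bool],
            replicate: Callable[[Sample], Optional[Sample]],
            generations: int = 2) -> bool:
        """Accept a sample only if static analysis finds replication code
        AND it actually replicates for `generations` generations - this
        filters out corrupted files and 'intendeds', and also one-shot
        samples that replicate once and then die."""
        if not analyze(sample):
            return False
        current = sample
        for _ in range(generations):
            offspring = replicate(current)
            if offspring is None:      # failed to replicate
                return False
            current = offspring        # test the next generation
        return True

    def build_test_set(candidates: List[Sample],
                       analyze: Callable[[Sample], bool],
                       replicate: Callable[[Sample], Optional[Sample]]) -> List[Sample]:
        # Only samples whose replication has been verified first-hand
        # enter the set; "scanner X detects it" plays no part here.
        return [s for s in candidates if vet(s, analyze, replicate)]
    ```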

    Regards,
    Vesselin
     
  12. egghead

    egghead Registered Member

    Joined:
    Aug 27, 2005
    Posts:
    443
    Location:
    The Netherlands
    This thread has made me curious; I have installed the Doctor and will give it my own tests :)
     
  13. Bob D

    Bob D Registered Member

    Joined:
    Apr 18, 2005
    Posts:
    1,234
    Location:
    Mass., USA
    AV-C, generally considered the benchmark and definitive authority on AV testing methodology, is now suspect?
    Oh my God, it's Armageddon at Wilders!
     
  14. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    I am trying to decide between F-Prot and Dr Web after today.;)
     
  15. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    ouch, some harsh comments from the experts.


    an enjoyable read though, I must say
     
  16. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    The ends justify the means. If the remarks by a DrWeb employee are to be believed, a corrupt sample rate as low as 0.43% in a sample set that large strikes me as quite impressive. I for one do not care for the roundabout ways some might attempt to discredit the tester - if the figures in the results table are fair, unbiased, and accurate with reasonable allowance for human error, then I'm perfectly satisfied.
     
  17. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    With due respect - this statement makes no sense at all. Bontchev is posting here as an employee of FRISK, as is plain for all to see. Therefore, everything Bontchev posts in this thread is directly related to FRISK. No way around that. If FRISK isn't happy with the way Bontchev represents FRISK in this thread, FRISK has a problem at hand.

    regards,

    paul
     
  18. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    Rescue disks?
    Critical thinking should expose the enthusiast user, the fan, the fanboy, the supporter, etc
    Also easy to spot (IMO)
    I never heard any AV vendor call IBK a simple VXer. Why did you (the AV industry) allow him to test your products and become a trusted person among end users if he is merely a VXer (not my thoughts)?
    That's what I'm thinking. The outcome of this might be very good ...... or very bad.
    Indeed.
    I always have this post in my mind. The detection (or lack thereof) of a sample doesn't say a single thing about that sample and/or the scanner.
     
  19. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    You know, the sad part of a thread like this is that there are no winners. In the end, was there really anything gained by anyone from it? I am as much to blame as others here. I only wish and hope the pieces can be put back together again.

    We all come here to learn and educate, and today all we accomplished was to mutilate. :'(

    Not a good feeling if you do a gut check.
     
  20. larryb52

    larryb52 Registered Member

    Joined:
    Feb 16, 2006
    Posts:
    1,131

    Let's not bring my gut into it - I like cookies, and not the ones the computer uses ;-) ... sorry, just trying a little levity here... sorry for going off topic...
     
  21. EliteKiller

    EliteKiller Registered Member

    Joined:
    Jan 18, 2007
    Posts:
    1,138
    Location:
    TX
    AV-test.org (Marx) is also considered the "holy grail" when it comes to malware testing. The latest test included over 1 million samples, and all but 8 vendors scored above 90%. Since the top performers typically mirror those at AV-Comparatives, is it safe to assume Marx is using junk files as well?
     
  22. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    I hope that too, but not as part of damage control.
    I always try to find educative material, even if it's hidden at first sight.
     
  23. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Keep in mind, people, it is one thing to have a civilized debate. It is quite another when we start destroying people's lives and credibility without a frigging lick of credible evidence being produced today.

    Sorry, I just can't do that; it just isn't right.
     
  24. Bob D

    Bob D Registered Member

    Joined:
    Apr 18, 2005
    Posts:
    1,234
    Location:
    Mass., USA
    Not at all.
    A widely held belief is being challenged.
    People (a lot more knowledgeable than most of us in such matters) are debating their viewpoints.
    That's what a "forum" is!
     
  25. Frisk

    Frisk AV Old-Timer

    Joined:
    Jan 28, 2008
    Posts:
    31
    Location:
    Iceland
    Well, as far as I am concerned, Vesselin has been around long enough and has enough of a reputation in the AV industry to have an opinion of his own, independent of any "official" F-PROT company line.

    The thing is, of course, that we (just like any other AV company) have sometimes been very unhappy with various AV tests. There is nothing new there. There have been cases where I have been rather upset with VB, and regarding some VTC Hamburg tests I was very, very tired of all the non-replicating garbage files included in their test set - garbage files that we did not pick up (for example, because they were corrupted or truncated) but some other AV programs detected. The sad part was that they really wanted to do a good job, but did not have the resources to do so. This problem has actually gotten worse as the number of samples has increased.

    Just one example:

    As you know AV companies receive samples from VirusTotal, Jotti and others. Now, on an average day we may receive several thousand different samples. A substantial part of those files are non-malicious, but we receive them because some AV company generates a false positive on them. The best example is of course the Fenomen game downloaders (reported as Trojan.OnLineGames-65 by Clam and as Trojan-PWS.OnlineGames.BCA by Ikarus), but there are many, many others.

    Testers face the same problem as the AV companies - "weeding" the trash from the malicious files - and that requires quite a bit of work to do properly: you need several people doing nothing but analysing potential malware.

    So, it may not be surprising that there are "junk" files in collections used for testing, but the "quality" of the collection is only one of many factors that determine the quality of the test and how "interesting" it is for any particular company.
     