Dr Web and AV Comparatives

Discussion in 'other anti-virus software' started by jrmhng, Feb 3, 2008.

Thread Status:
Not open for further replies.
  1. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Innocent till proven guilty: hmmmmmmmmm

    Let me state the facts of this entire thread; then you can form your own opinion.

    Two AV vendors pulled out of testing by AV-C. The reasons stated are inaccuracy of some samples and the fees charged. As far as the first claim goes, there is no proof, only conjecture, though it may have validity depending on the expertise of the person posting.

    Fees: AV-C is entirely within its rights to set whatever fee structure it deems fair. It is then up to the vendor to decide whether the charge is equitable in their eyes and whether they wish to continue being associated with AV-C.

    That is really all this boils down to. You have two vendors who have chosen, as is their right, to no longer be associated with IBK. You still have some major players who, at this time, are evidently satisfied with the testing and pricing structure.


    Did I leave something out? Oh yes, all the crap in between. :rolleyes: :thumbd:
     
  2. Bob D

    Bob D Registered Member

    Joined:
    Apr 18, 2005
    Posts:
    1,234
    Location:
    Mass., USA
    Dr. Web was one of the 8.
    I say we start another thread.
    "Dr Web and AV-test.org"! :)
     
  3. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    Problem is, Vesselin isn't posting in this thread as an "independent" poster; he has been posting with "FRISK results" as arguments. Apart from that, this public board is not intended for a private dog fight. Our staff has been snipping quite a few very personal insults from Bontchev aimed at Andreas Clementi. That reflects badly on FRISK, and such a dog fight belongs on a private domain.

    That's a well-known fact indeed.

    No argument here. Then again, if I'm not mistaken, FRISK was, and still is, interested in having their products tested by (amongst others) av-comparatives?

    regards,

    paul
     
  4. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    Probably the most interesting thread I have ever read in this forum (AV section). Thank you all!
     
  5. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    IMHO, WE here are in no position to judge the accuracy or inaccuracy of the samples. Believing either side is only a matter of whom you trust, IF you trust either of the two parties. WE have never seen the samples, nor have we tested them. AV-C says the samples are good, uses them in tests, and publishes the results; that's what WE read. Two AV experts say there are some garbage files, which WE can't see either.

    Without being in a position to verify either claim, you can only stay neutral or hope you have placed your trust in the right person.
     
  6. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    I trust myself :D
     
  7. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    Wise man. :D
     
  8. dvk01

    dvk01 Global Moderator

    Joined:
    Oct 9, 2003
    Posts:
    3,131
    Location:
    Loughton, Essex. UK
    From my point of view, and I am very much an amateur in this field with only a very, very basic understanding of malware analysis, all the current antivirus tests are about as much use in the real world as a chocolate fireguard.

    Taking it from a consumer point of view, I want my antimalware program (please note I don't call it an antivirus, as that term is out of date and useless) to detect, block and prevent infection, BUT also to be able to disinfect or remove whatever got on before a signature was available for it.

    That doesn't happen.

    The vast majority of antimalware programs are quite effective at detecting, reporting and blocking the download/install, BUT woefully inadequate at disinfecting or removing modern malware, and that is the problem with the current batch of tests.
    All they do is run a scanner against a collection of files: some definite malware, some dubious, and some not malware BUT still involved in an infection.

    What some AV analysts who have posted in this thread seem to miss is that even though a file might not be malicious, and is only an icon or image or text or whatever, IT has still been dropped on the user's system as part of a malware attack.

    The user in the street wants their antimalware program to completely remove all the crud that has been installed with an infection, whether malicious or not.

    Take Zlob, for example.

    Most AVs detect and block Zlob, and quite a few actually delete the Zlob file itself, BUT they ignore the dropped files (normally fake antispyware programs).

    The same goes for Vundo.

    I don't know of one antivirus that I can hold up and say, YES, this one always deals with Vundo, because none of them do; manual removal using specialist tools is needed.

    The number of times a victim comes to a cleaning forum saying their AV detected Vundo and said the machine was clean, only for it still to be there after a reboot, is beyond belief, and that is the whole crux of this matter.

    Until AV tests are carried out in a real-world situation, with a computer loaded with the normal software a user will have (including P2P programs, messenger, email, etc.) and infected by "normal" means, like a user downloading a codec or file from P2P and letting it run,
    and then an AV is installed and attempts to clean up the infection without damaging the rest of the system or the programs on it, the tests mean nothing to the man in the street.

    Many of the AV companies and their analysts are still somewhat back in the dark ages, still thinking of the traditional "file infector" virus, and don't see anything else as a problem for the AV to deal with.

    Modern malware rarely replicates in the manner of a virus, yet the bulk of the current disinfection tests are still based on how well an AV can disinfect viral code from a file.

    How many new "viruses" have you all seen in the last two or three years, and how many other kinds of malware?

    I suggest the number of true viruses can almost be counted on one hand compared with the explosion in trojans, spyware, adware, worms, etc., BUT most tests don't seem to take the disinfection or removal of those into account.
     
  9. larryb52

    larryb52 Registered Member

    Joined:
    Feb 16, 2006
    Posts:
    1,131
    You nailed it. I had Vundo on my wife's PC; three times I thought it was gone, and three times it came back. Five programs later, I'm still not sure...
     
  10. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    Since FRISK has stopped using the services of AV-C, where can I find a test that meets the "standard" of FRISK?
     
  11. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    Yes, I'm sure the top scorers just receive the DVDs from IBK and add them all to their database.

    Twenty minutes' work: simple, and it scores high in the test.
     
  12. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    I've heard the same comments from DrWeb too.


    I could quote the rest, but it seems almost offensive. o_O
     
  13. Paul Wilders

    Paul Wilders Administrator

    Joined:
    Jul 1, 2001
    Posts:
    12,475
    Location:
    The Netherlands
    No one stated FRISK has stopped using the services of av-comparatives; they may well just be skipping the upcoming test.

    regards,

    paul
     
  14. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    I am not capable of testing a security application, so I rely upon tests conducted by the various "organizations." If a vendor's product is not tested, then I have no idea as to its capabilities.

    I trust AVC more than any of the others, but I would consider other test organizations and would look at how the product performs across the various testers.

    If I could not see fairly consistent results from several testers, I would not even consider purchasing the AV.

    Although I do not trust the magazines to do objective tests, if results among them were generally consistent, that would be enough. I bought my first AV, Bit Defender, as a result of PC Magazine's test results, and did not go wrong.
    Still, I generally ignore magazine rankings.

    My conclusion is that nearly all of the AVs are plenty good enough for those of us who are safe surfers. I suppose it is possible that some attacker could get by one of the lower-rated AVs, but I really doubt it will happen. I guess ignorance is bliss. Although I use one of the best, I would not be worried if I used AVG, Avast, Dr Web, or F-Prot along with a software firewall, but I would want to add something like SAS.

    The various arguments about the validity of the tests, and the vendors' objections, are not helpful to folks like me. I want to see tests anyway.
    I am glad there are those here who have the skill to test, and to assess the various tests, but I am not one of them and will continue to rely on tests in which I have some degree of confidence.

    I think it is a mistake for vendors to opt out of tests because they receive lower ratings. There are always excuses that can be used to justify opting out, but in the end it is "out of sight, out of mind."

    Regards,
    Jerry
     
  15. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    Is there an "AV testing/ratings game" being played, I wonder?

    Game activities -

    1 - Tester accumulates a large database of stuff, some of which is malware and some of which is (uhhh) "crap."

    2 - Tester tests AVs, then sends the respective companies their "missed" samples (some part of which is "crap"?).

    3 - AV company "A" plays the game and makes (computer-generated?) signatures for ALL the missed samples (including the "crap"?). AV company "B" doesn't follow suit.

    4 - On the next test, company "A" scores high; company "B" scores mediocre.

    In a two-player game (A-type companies versus B-type companies) I suppose "A" wins. However, there is a third player in the game -- namely, the people (like me) who buy A-type AVs based on tests. Do we win or lose by betting on the game's outcome, I wonder?

    "Consumer Reports" does not charge products fees in order to test them. Instead, "Consumer Reports" charges fees to the people who want to read its test reports. Is there such a thing for tests of security apps? Could there be? (That is, would enough people pay for independent reviews of security software?)
     
    Last edited: Feb 11, 2008
  16. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    Previously I wrote that the sample must be representative and random.

    Now add QUALITY CONTROL, which is what this entire dispute is about. Who verifies that the malware used in tests is real malware (assuming the rest of the testing procedure is impeccable)?

    Ideally, there should be both intra-lab and inter-lab control. That means files being analysed locally to verify their malicious behaviour, AND an exchange of data with other testing labs, or with the vendors too, who would compare their results on the same malware samples. The comparison, with a documented explanation of why something is malware or garbage, would show who is at fault, and that party would need to rectify their methods. AV vendors could even put the testing lab through a verification procedure: each vendor supplies a number of samples, some genuine malware and some not, and then watches whether the testing lab produces false positives or false negatives. In other words, trialling the lab's ability to tell real malware from garbage. Of course, each party should provide documentation as to why a disputed sample is what they claim it to be.
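
    To make the vendor-run verification idea concrete, here is a minimal sketch in Python (the sample names, ground truth and verdicts are all invented for illustration) of how such a trial could be scored, counting the lab's false positives and false negatives against the vendors' ground truth:

    # Vendors' ground truth: sample name -> is it genuine malware?
    ground_truth = {"sample_a": True, "sample_b": False, "sample_c": True}

    # Verdicts returned by the testing lab under trial.
    lab_verdicts = {"sample_a": True, "sample_b": True, "sample_c": True}

    # Garbage the lab called malware (false positives) and
    # malware the lab called garbage (false negatives).
    fp = sum(1 for s, is_mal in ground_truth.items() if not is_mal and lab_verdicts[s])
    fn = sum(1 for s, is_mal in ground_truth.items() if is_mal and not lab_verdicts[s])

    garbage_total = sum(1 for is_mal in ground_truth.values() if not is_mal)
    malware_total = sum(1 for is_mal in ground_truth.values() if is_mal)
    print(f"false positive rate: {fp / garbage_total:.0%}")  # 100%: sample_b was garbage
    print(f"false negative rate: {fn / malware_total:.0%}")  # 0%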

    This is what happens in other professional fields, where testing is a very serious job and mistakes can cost a great deal, even lives.

    Regards.
     
  17. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    I must have read this reply wrong, sorry about that.
     
  18. dvk01

    dvk01 Global Moderator

    Joined:
    Oct 9, 2003
    Posts:
    3,131
    Location:
    Loughton, Essex. UK
    Before you can even think about testing, you need to decide what the antivirus or antimalware program being looked at is supposed to do, or what you want it to do.

    One of the big disagreements between testers, and between some AV companies and analysts, is about crud/garbage and non-malicious files.
    Now, to a pure AV, any non-replicating or non-malicious file is garbage, so it should be rejected from a test set and shouldn't be detected by the program.

    HOWEVER, in the real world an infected computer will have all sorts of crud on it dropped by the infector, some malicious and some not.

    That is what we have to decide first of all: should the AV detect all of that or not?
    Take Zlob/Vundo and other similar downloaders for fake antispyware programs.

    Many AVs will detect the downloader and what they consider malicious files, but won't detect, for example, the images that pop up in the taskbar or replace your desktop background with a "you are infected" warning trick, or the files that run them, because those are not malicious, just a minor (major, in my book) annoyance.

    That is what is wrong with the testing: until all players decide what should be detected and fixed, there will never be agreement between the two sides.

    From a consumer point of view, we want everything detected and fixed.
     
  19. LowWaterMark

    LowWaterMark Administrator

    Joined:
    Aug 10, 2002
    Posts:
    18,280
    Location:
    New England
    Actually, you may have. Basically, they have not said they will never have anything to do with av-comparatives again, or that they won't use any of their other services. They have only said (so far) that they aren't participating in the next upcoming test. That is what Paul was pointing out.
     
  20. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    It's perfectly clear to me now, thank you.
     
  21. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    Correct. They must at least set a basic standard as to what is supposed to be detected. One party thinks a given sample must be detected, the other thinks not; this is the infamous "research bias".

    One can Google one's heart out about research bias to see what it means and how it is dealt with in other professions, and then try to find the corresponding issue in AV technology.

    They must agree on what is supposed to be detected. Once they do, they must make sure there is no sampling bias, but rather a real-world representation of the types of threats. (This would need a separate statistical measurement, and of course the results wouldn't be absolute but would come with a confidence interval, which would make the sample pool statistically significant; usually a 95% confidence level is acceptable. See the sketch below.) At the end of this measurement, you would have a theoretically correct statistical mix of the types of malware that exist in real life (e.g. 40% trojans, 10% rootkits, etc., or even broken down by the way they are delivered to the target). As for what a representative sample of a single piece of malware is, I don't know how to judge that. I mean, in Spain, Spanish malware writers must have a higher prevalence; in Russia, the Russian ones.
    Of course, a lab would have to pay a statistician for this job...
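
    As a rough illustration of the confidence-interval point above, here is a minimal sketch in Python (the detection numbers are hypothetical) using the normal approximation to the binomial for a measured detection rate:

    import math

    detected, total = 970, 1000  # hypothetical: 970 of 1000 samples caught
    p = detected / total

    z = 1.96  # z-score for a 95% confidence level
    margin = z * math.sqrt(p * (1 - p) / total)
    print(f"detection rate: {p:.1%} +/- {margin:.1%}")  # 97.0% +/- 1.1%

    So even a perfectly executed test of 1,000 samples cannot, by itself, separate two products whose scores differ by a point or so.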

    Then you have to deal with pre-analytical (sample preparation, correct AV setup), analytical (correct scanning procedures) and post-analytical (correct presentation of the results) errors.

    Then you have to do quality control, to be sure your lab is detecting what others are detecting too, and what everyone should be detecting, and not just phantom malware because an incompetent employee is making errors of judgement.
     
  22. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    And I think they will eventually come back, with things from both sides tweaked and improved. ;)
     
  23. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Actually, there's a host of bias issues involved in these challenge tests, none of which is readily addressed, although I'd personally put selection bias as the likely more significant issue.

    One pragmatic point to keep in mind is that the controlled standardization protocols you describe are not operationally workable in a situation where the standards evolve much more slowly than the real challenge they attempt to depict - unless you wish to test how well you've done in the past. By the time the standard testbed of samples has been validated, the results of the test are likely to be irrelevant.

    That's not to diminish the caveats noted above, for they are real issues. However, readers should not lose sight of the fact that approximate results (be it a back-of-the-envelope estimation or an AV challenge test) can be quite useful, as long as they are understood as approximate from the start and viewed with that in mind. These tests are approximate, despite displaying significant figures galore.

    Pragmatically, I realize there is a disconnect here, since most readers are either congenitally unable or unwilling to appreciate that 95% and 99% detected might actually be the same result. Perhaps that range is a lot wider; say, 85% to 100% are objectively the same result. I also understand that, from a marketing perspective, being on the "wrong" side of that nuance is a commercially losing proposition.

    The only way through this is to provide result metrics that objectively portray the degree to which performance differences can be measured. In a testbed containing x% disputed material (= genuine junk + "misclassified" samples + other), a very simple argument shows that results within ~2x% of one another have to be viewed as the same unless further analysis is performed. Even for very modest levels of disputed material, differences between results are quickly blurred. For example, by this argument, 6% testbed contamination would render all products examined in the last av-comparatives on-demand test objectively indistinguishable.
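
    A minimal sketch of that arithmetic in Python (the scores are hypothetical; the 6% contamination level is taken from the example above): if x% of the testbed is disputed, each published score could shift by up to x points in either direction, so two scores whose worst-case intervals overlap cannot objectively be told apart.

    disputed = 0.06  # assumed fraction of disputed material in the testbed
    score_a, score_b = 0.99, 0.94  # hypothetical published detection rates

    # Worst-case intervals for each product's "true" score on a clean testbed.
    lo_a, hi_a = score_a - disputed, min(1.0, score_a + disputed)
    lo_b, hi_b = score_b - disputed, min(1.0, score_b + disputed)

    overlap = lo_a <= hi_b and lo_b <= hi_a
    print("indistinguishable" if overlap else "distinguishable")  # indistinguishable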

    Blue
     
  24. RealResults

    RealResults Registered Member

    Joined:
    Mar 8, 2006
    Posts:
    43
    This post was one of the most interesting I have ever read on this forum. That is, until Inspector Clouseau (an employee of FRISK) made the following statement about Vesselin aka bontchev (also an employee of FRISK): "Vesselin speaks *NOT* official on behalf of FRISK Software International."

    So we have Vesselin and Severyanin (an employee of Dr.Web) directly challenging the validity of AV-Comparatives ("AVC"), stating that the test set includes a lot of "crap." Those are pretty bold statements.

    I do not understand why Inspector Clouseau would try to separate Vesselin from FRISK. Also, as one of those qualified as experts, why do you not provide us with your opinion, Inspector Clouseau?

    I would also challenge Stefan Kurtzhals (who I believe works for Avira) to give his expert opinion on the quality of the AVC test set.

    Frankly, if those who work in the industry and know how to dissect potential malware are not willing to state their opinion, then this thread is essentially dead to me and will end up being another boring back-and-forth argument between amateurs.

    Right or wrong, I respect Vesselin and Severyanin for having the courage to state their opinions. However, their opinions are a little suspect considering FRISK and Dr.Web have both withdrawn from AVC. On the other hand, Dr.Web has essentially been making these claims for a couple of years.

    I challenge Inspector Clouseau and Stefan Kurtzhals, or for that matter anyone else who works in the industry dissecting malware, to step forward and provide us with an analysis of AVC and its testing quality. It would be informative to hear the opinion of an AV expert whose product scores high on AVC.
     
    Last edited: Feb 11, 2008
  25. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    I think most of what was posted today was pure "skata." :cautious:
     