Dr Web and AV Comparatives

Discussion in 'other anti-virus software' started by jrmhng, Feb 3, 2008.

Thread Status:
Not open for further replies.
  1. 031

    031 Registered Member

    Joined:
    Sep 5, 2007
    Posts:
    187
    Location:
    Bangladesh
    In the last three on-demand tests, F-Prot only missed 84 samples out of 121,077 macro samples. Am I missing something?
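
    For reference, here is a quick back-of-the-envelope check of what those figures imply; a minimal sketch in Python, using only the miss count and sample total quoted above:

    ```python
    # Detection rate implied by the figures quoted above:
    # 84 missed samples out of 121,077 macro samples.
    total_samples = 121_077
    missed = 84

    detection_rate = (total_samples - missed) / total_samples
    print(f"Detection rate: {detection_rate:.4%}")  # roughly 99.93%
    ```

    In other words, the quoted figures correspond to a macro detection rate of about 99.93%.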
     
    Last edited: Feb 11, 2008
  2. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    Well, Blue, it's the AV professionals' job to try to apply the control protocols that apply to their work. I don't work in their sector, so I'm not supposed to solve their problems over the internet.

    As far as I am concerned, even a post-test control would be beneficial for us users. Meaning, we, as well as the testers and vendors, would be in a position to have an idea of the credibility of a certain lab's tests. It would be a first step. In most other professions, even the labs get audited for their efficiency. Who checks whether the AV testing labs do their job as advertised?

    I am certain that if the interested parties, who know their job better than I do, take an interest in applying ideas that already exist in their field, they will come up with a method that avoids disputes over results.

    OK, the vendors just want the tests as publicity, so all they care about is scoring 99%. The end user, on the other hand, cares about having an index of real-life performance. They can't go on forever with a tactic of the type: "Here are my malware samples, people (gathered according to my beliefs, no need to know how); I turned on the scanners and here's what happened."

    That isn't a representation of real-life performance. It is a representation of his sample and his sampling bias.

    Anyway, I just can't accept it as logical to go on like this, with accusations from both sides where neither can prove the other is at fault. I also can't solve their standardization and quality-control problems for them, since I have no relation to their profession, but if every other field that does testing has resorted to formal protocols, so must they. Otherwise you can't speak of realistic results, but rather of charlatanism. This also explains why one magazine ranks antivirus A first and another ranks antivirus B first, or why you often see Russian tests that consistently put the Russian antiviruses at high ranks, and so on. They don't necessarily LIE. But it depends on the sample they used. A more Russia-oriented pool of samples is more likely to produce Russian AVs as winners.
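
    To illustrate how the composition of the test set alone can flip a ranking, here is a small hypothetical sketch; the scanner names, regions, and detection rates below are invented purely for illustration and are not taken from any real test:

    ```python
    # Hypothetical illustration: the same two scanners ranked on two differently mixed test sets.
    # All numbers are made up to show how the sample mix, not the scanners, drives the ranking.

    rates = {
        "ScannerA": {"regionX": 0.99, "regionY": 0.90},  # stronger on region-X malware
        "ScannerB": {"regionX": 0.92, "regionY": 0.98},  # stronger on region-Y malware
    }

    def overall_score(scanner: str, mix: dict) -> float:
        """Weighted detection rate for a given test-set mix (weights sum to 1)."""
        return sum(rates[scanner][region] * weight for region, weight in mix.items())

    mix_x_heavy = {"regionX": 0.8, "regionY": 0.2}  # test set dominated by region-X samples
    mix_y_heavy = {"regionX": 0.2, "regionY": 0.8}  # test set dominated by region-Y samples

    for label, mix in [("X-heavy test set", mix_x_heavy), ("Y-heavy test set", mix_y_heavy)]:
        scores = {s: round(overall_score(s, mix), 3) for s in rates}
        winner = max(scores, key=scores.get)
        print(f"{label}: {scores} -> winner: {winner}")
    ```

    Neither scanner changes between the two runs; only the sample mix does, and the "winner" flips.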

    Until they create a testing protocol that testers and vendors agree on (and hopefully end users too, who don't care about publicity but about having that 99% success on real-life PCs), I will continue believing only Jotti's results (the ones that don't look like false positives).
     
  3. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
  4. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    I think you are, though it's more a matter of Mr Bontchev providing flawed commentary than any fault of your own.

    Looking back at test data, F-Prot has always scored spectacularly well on macro viruses, even achieving a 100% detection rate in the February 2007 on-demand comparative. If the statement is true that only missed samples are sent to vendors, and Frisk has only seen the samples it did not detect, then it must be asked whether, for all his personal insults, Mr Bontchev will produce any credible evidence to back his position, or at least explain this strange logical loophole in his remarks.
     
  5. dvk01

    dvk01 Global Moderator

    Joined:
    Oct 9, 2003
    Posts:
    3,131
    Location:
    Loughton, Essex. UK
    As we have seen from some comments here, many readers are unable to distinguish an employee of an antivirus company giving his personal opinion about the results from one speaking on behalf of the company and stating company policy.

    Most, indeed I would say ALL, AV companies have approved spokespersons who can speak on behalf of the company and state categorically what is or isn't company policy.

    It is difficult for any employee, no matter how knowledgeable or how well respected in the industry, to make a statement, especially in this very controversial field of testing, without it being misinterpreted as company policy rather than his or her personal opinion.

    Inspector Clouseau is one of the approved spokespersons for Frisk here, so he has to state that Vesselin's opinions are Vesselin's opinions and not company policy.

    And for the same reasons he is unable to give his personal opinions on the subject, as he would be misinterpreted as speaking on behalf of his company rather than on behalf of himself.
     
  6. 031

    031 Registered Member

    Joined:
    Sep 5, 2007
    Posts:
    187
    Location:
    Bangladesh
    Thanks, solcroft. This thread is really interesting. Whatever, I am still in the dark...
     
  7. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    My impression is that Frisk is playing a "good cop, bad cop" game, with Bontchev being the bad cop. :D

    In my eyes, it is *obvious* that Frisk doesn't entirely reject Bontchev's opinion; otherwise, why would Frisk skip the next AV-C?
     
  8. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    In the end I think they are doing themselves more harm than good, especially with the theatrical performance displayed by Bontchev yesterday. I mean, this is a professional. :thumbd:
     
  9. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    Their downfall was when they decided to quote figures to trump up their case. They seem to think that nobody on these forums is capable of reading reports or performing elementary-school mathematics.
     
  10. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    I think it's enough now; we are not in WWII.
    It was our decision to SKIP this year's AV-C. Vesselin wrote here a bit too bluntly from his PERSONAL opinion; sorry for that on behalf of FRISK Software. I think you can stop quoting Vesselin and asking him again, because I have serious doubts that he will ever reply here again.

    So folks, back to the topic, please.
     
  11. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Thanks IC, that is good enough for me.
     
  12. jrmhng

    jrmhng Registered Member

    Joined:
    Nov 4, 2007
    Posts:
    1,268
    Location:
    Australia
    If Vesselin has a personal opinion then he should be free to have it without it being linked to F-Prot. Just because you work for a company doesn't mean that you are always speaking on behalf of the company.

    Frisk did not gag Vesselin, did they?
     
  13. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    My remarks are directed at whoever made the statements in question. If those were indeed Mr Bontchev's personal opinions, then I see no need or obligation for Frisk to apologize for them; that responsibility should, as is proper, fall squarely on Mr Bontchev. As the facts stand right now, there seem to be two possibilities. One is that Mr Bontchev is indeed right and the AV-C testbed is full of crap; but if that is true, then F-Prot detected 99-100% of that crap, and Mr Bontchev could not possibly be in possession of the evidence he claims his stand was based on. The second possibility, which seems far more likely to me, is that Mr Bontchev was not making very good use of his supposedly superior intellect as he delivered his barrage of personal disparagements at IBK, which contained more holes than President Bush's patriotic war speeches, and in the end turned out to be just as laughable.
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    No, their problems are not yours to solve. However, if you advance a standardization effort as practiced in other industries, I would hope you know at least some of the pragmatic constraints that have to be satisfied to make it work, and a large one is that the technical landscape must not be highly fluid over the period during which the standards are developed.
    I won't go into details, but I was recently involved in commenting on a standardization effort that passed muster with those vetting the protocols, protocols that, unfortunately, the assembled group was not technically equipped to handle. To be perfectly candid, they got it wrong in a major way from a measurement perspective. The result is that the standard is OK for some issues but can be highly misleading for others, and that didn't need to happen. At this point, it is too late to adjust. The end result is that the measurement tells you a lot less than it could, and if you don't delve into the nuanced technical details, it can be flat-out misleading. Standards help, but the right folks have to be in the room when they're formulated.
    Correct, and real-life performance in my hands depends on any sampling bias embedded in my usage profile. If you want to look at sampling bias, this is where it is: in the end user's hands, based on the malware population that they will actually be exposed to. This bias overwhelms all others (unless, of course, you're one of those users exposed to a million pieces of malware every few months; based on some postings here, they probably exist).

    I'm not sure why you believe this is any less flawed than some of the tests we've discussed above. Objectively speaking, it is even less controlled at the end of the day.

    Blue
     
  15. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    I apologize for the apology o_O
     
  16. EraserHW

    EraserHW Malware Expert

    Joined:
    Oct 19, 2005
    Posts:
    588
    Location:
    Italy
    :D Would someone else apologize too? :D
     
  17. dan_maran

    dan_maran Registered Member

    Joined:
    Aug 30, 2004
    Posts:
    1,053
    Location:
    98031
    I have been watching this thread for the last few days and I have to say it has been one of the more interesting ones since the "My AV is BIGGER than your AV" threads of yesteryear.

    On a serious note, though: I understand where Vesselin is coming from, and I can say that I have sometimes wondered about IBK's "sorting" and clarification of what is actually malicious. However, I will not go any further, because IBK DID help me sort out some of the crap I had on hand, and I thank him for that and for his testing.

    What I am trying to say is that both parties are right, in my opinion, and until this is sorted out in a "standardized" manner (take the auto industry, for example), we are all "up s*** creek without a paddle".
     
  18. Fuzzfas

    Fuzzfas Registered Member

    Joined:
    Jun 24, 2007
    Posts:
    2,753
    You can't, of course, predict the malware population that the end user will be exposed to according to his setup and use. But that's why statistics exist. Statistically, there is even a prediction of your chance of ending up in an airplane accident. Not all air carriers have the same safety level, yet the statistics are still available. It's been some years since I studied statistics, but it has an answer for almost every problem. You can even build a user profile that matches the majority of users (e.g. an IE user with ActiveX enabled). Any attempt, IMHO, would be better than the current situation. Because otherwise, what do these tests offer? There is little to no quality control. What does the end user get, other than publicity for the vendors that score best?

    Who verifies whether what Bontchev said about vendors adding "missed samples" as-is, without checking them, is true? Who verifies that the samples represent malware from all areas? If I am a Malaysian user, will a European magazine's sample be representative of the distribution of malware in Malaysia too, or will a Malaysian magazine have a more representative sample? And if so, why doesn't anyone give details on how he collects his samples, I mean, the criteria? If the sample isn't representative, shouldn't there be a warning to the reader? This would explain why Russian tests have Russian AVs excelling, German magazines have German AVs excelling, and so on. I guess I could have a friend of mine with a computer science degree make a test that, by using more Asian samples, makes PC Cillin come first... As long as nobody impartial checks your samples, you can pretty much come up with any result you want.

    Simple: between believing a 99% success rate and believing Jotti's, I find it more logical, based on my personal experience, to believe that something is wrong with the 99% success rate rather than with Jotti's, which shows more misses even in cases where most AVs detect the malware, so most probably it isn't a false positive. Of course, Jotti's isn't accurate either, because again we have no quality control. Yet I believe that saying AVs "score 99%" is only a bad joke for the end user, and useful only to AV vendors. That's all. It's not a scientific reply at all, because I see nothing scientific in AV testing. But it's an empirical reply. For me, with the way tests are currently performed, and I mean in general, because every month there is a magazine with "the AV shootout!", you can't talk of either science or professionalism, other than that someone pays for the tests, so yes, it becomes a profession.

    I mean, AV testing isn't the only sector with apparently huge variables that can't be predicted. Take medicine, for example. They have a probability for everything: getting the flu in winter, having cancer after 60, etc. People aren't the same. They don't have the same immune system, nutrition, or chronic diseases, and they won't go to the same places; yet, since they want to be called scientists, they use statistics and standardization to produce a probable result (not an absolute one, but a probable one, and for each prediction they give its margin of error, measured with the standard deviation, AND they explain where and how the population sample was selected and why it was considered representative). If the AV industry can't do the same because "it's too complicated", well, pity for them; I will continue reading Jotti's and laughing at the 99% success tests.
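
    For what it's worth, the kind of margin of error alluded to above is easy to compute for a single test set; here is a minimal sketch using the normal approximation to the binomial, with the miss count quoted earlier in the thread reused purely as an illustration:

    ```python
    import math

    # Illustrative figures from earlier in the thread: 84 misses out of 121,077 samples.
    n = 121_077                   # samples in the test set
    misses = 84
    p_hat = (n - misses) / n      # observed detection rate

    # 95% confidence interval via the normal approximation to the binomial distribution.
    z = 1.96
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"Detection rate: {p_hat:.4%} +/- {margin:.4%}")
    ```

    Note that this only quantifies the statistical uncertainty within that particular test set; it says nothing about whether the set itself is representative, which is exactly the objection being raised here.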


    Regards
     
  19. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    Be aware that Jotti:
    - Uses Linux-based scanners, which often are less capable and/or outdated compared to their Windows counterparts.
    - Uses settings which aren't known to us. Some AVs might have the paranoid mode switched on and others might be skipping PUPs/riskware.
    - Involves only flat-file scanning, while most of the big AV vendors are starting to incorporate other forms of detection/prevention (behaviour blockers, sandbox analysis only at execution, IDS signatures, buffer-overflow detection, etc.) into their AVs/suites. For example, ESET has heuristic rules/algorithms which only work in the web scanner (i.e. they're not available in the real-time and on-demand scanners).
     
  20. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    IMO it is not exactly black-and-white. Plus a lot of the things said on this thread just do not add up. I doubt you are going to have anyone comment on the quality of AV-C's test set, but you can see that apart from Frisk and Dr.Web everyone else decided to stay (even Panda, which purportedly had backed out in 2005 for similar reasons), even if the prices did rise substantially.

    So far, neither Dr.Web nor Frisk has produced any evidence for their claims. On the other hand, AV-Comparatives did release a paper about the impact of corrupted files on its test set... :doubt:

    Yeah, but AFAIK there might be another reason too.
     
  21. RealResults

    RealResults Registered Member

    Joined:
    Mar 8, 2006
    Posts:
    43
    I think everything you have stated above is a total cop-out and creates no accountability. Furthermore, those of us who have been around a while know who works for which companies. Additionally, I think Inspector Clouseau is completely capable of responding to my post; I do not believe he needs you to communicate on his behalf. If he chooses not to respond, that is his prerogative, but in my opinion it is irresponsible given the seriousness of the allegations raised in this thread.


    Of course this is not WWII. Serious allegations have been made by experts regarding the validity of the AV-Comparatives tests. You and a few others are the only qualified experts who can give us lay persons intelligent insight into these matters.

    But for whatever reasons, not only do you refuse to provide us with your opinion, but it is clear you and/or FRISK have also put a gag order on Vesselin. Therefore, we are left in the dark because the experts refuse to address the issue.

    You then tell us to “get back to topic pls.” This is the topic: the validity of the AV-C test and whether or not the samples, or a portion of the samples, are “crap.”

    How else are we lay persons who frequent this forum supposed to reach any conclusions without guidance from the experts? Are the experts afraid of future retaliation from Andreas Clementi at AV-C should anything negative be said? Or are you afraid of the potential threat of litigation?

    Frankly, I still do not understand why the experts fear providing us with competent opinions. Without those opinions, there is no “topic” to get back to (apart from bickering among amateurs).

    From what I gather, you are highly respected within your professional community, Inspector. I still do not understand why you lack the courage, or are unwilling, to take a stand and provide us with your professional expert opinion.

    My hat is off to Andreas Clementi. Evidently the pen is mightier than the sword. Or, I should say, his AV-Comparatives tests have put him in a powerful position where only a couple of qualified experts are willing to challenge them. Actually, now that Vesselin has officially been silenced by FRISK, it appears Dr.Web is the only AV company willing to express its opinions in regard to AV-C.

    Frankly, I am disappointed that FRISK has taken the position it has. Hopefully we will get a response from an AV expert at another company who has the courage and qualifications to offer an expert opinion on the validity of AV-Comparatives' testing methodology.
     
  22. solcroft

    solcroft Registered Member

    Joined:
    Jun 1, 2006
    Posts:
    1,639
    RealResults, your ass-kissing skills are superbly impeccable.
     
  23. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Hey, I resent that remark. Fanboy, yes. Amateur, no. :cautious:
     
  24. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Folks,

    I provided enough warning regarding gratuitous comments about other members. If you can't adhere to that admonishment, move on. As for this thread, it's now closed.

    Regards,

    Blue
     