dr web 4.44 scan speed

Discussion in 'other anti-virus software' started by Banshee, Sep 20, 2007.

Thread Status:
Not open for further replies.
  1. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    @BZ, thank you for the calculation. However, our concern lies elsewhere. I am sorry you guys don't get the point. I will try to explain it again.

    1. We believe that Dr.Web has missed some (hundreds, thousands) of bad files. We are ready to improve our detection - and we have the material to do it.

    2. In the results available to the public we see precise figures (I won't mention the "few"/"many" wording used in the earlier proactive tests). So we see, for instance, that 44,410 macro viruses were submitted to the test. Symantec missed 12 of them, scoring 99.97%. I am really impressed by this performance, but the question comes: what if in fact the missing 12 are NOT macro viruses? Then the happy 100% would go to Symantec! Here I come to another question: why is the figure 44,410 there? Do we take it for granted? Are they CONFIRMED macro viruses, and who confirmed them? If they are not confirmed, then let us say "believed to be macro viruses". Don't you think the value of the test would change then?
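    A few lines make the arithmetic above concrete. The totals (44,410 samples, 12 missed) are the figures quoted in the post; treating any number of the missed files as junk is a purely hypothetical input, which is exactly the open question here:

```python
# Sketch of the detection-rate arithmetic discussed above.
# 44,410 samples and 12 misses are the quoted figures; junk_missed is
# hypothetical, since whether any missed file is junk is the very point
# under dispute.
def detection_rate(total, missed, junk_missed=0):
    """Detection % after discounting missed files reclassified as junk.

    A file reclassified as junk is removed from both the missed count
    and the test-set total, since it should never have been counted.
    """
    valid_total = total - junk_missed
    valid_missed = missed - junk_missed
    return 100 * (valid_total - valid_missed) / valid_total

print(round(detection_rate(44410, 12), 2))                  # 99.97
print(round(detection_rate(44410, 12, junk_missed=12), 2))  # 100.0
```

    As the two calls show, reclassifying all 12 misses as junk is precisely the difference between 99.97% and a clean 100%.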

    Please note that no explanation of the figure 44,410 is given in the table; what exactly it shows leaves us guessing. The same can be said about each category, let alone "other malware", which is really hard to interpret.

    Actually, the answer is given in the Disclaimer part of the report: no guarantee is given about the correctness and completeness of the tests (see the PDF file). But the figures in the online report are still figures, and those are what we are discussing now.

    @BZ, once again, I would point out: any calculations are fine, but we are on very uncertain ground here. On the other hand, the impact on business is very real. I have never seen anybody write "according to AV-Comparatives tests, conducted with no correctness and completeness guaranteed...".
     
  2. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    DrWeb says "We have done our best in order to protect your computers and information from all kinds of known and unknown viral threats."
    Don't you think it would change the value of your product if you admitted that the best you did still means thousands of real malware samples are missed, or that your product is more likely to give a false alarm compared to some other products? What "have done our best" means just leaves the users guessing.
    We are talking about the levels reached, which is what people look at and have to look at, not about the bean-counting.
     
  3. Blackcat

    Blackcat Registered Member

    Joined:
    Nov 22, 2002
    Posts:
    4,024
    Location:
    Christchurch, UK
    Looking forward to this. Will this be in the very near future?
    Can you expand on this?
     
  4. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    IBK, if this is all you can say, I cannot comment on it. What we are doing is our business; we can miss a lot of things that never appear in the wild. But this does not affect your business in any form. When we miss something in our daily work, we immediately analyze it and add it to the database. Your business, on the contrary, directly affects all AV companies, and you take no responsibility for what you are doing.

    By the way, do you mean to say that everything submitted to the test is real malware? Can I take it for granted? Please answer this question; it is very important for us.
     
  6. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    Since we at last have the missed samples from Andreas Clementi.
     
  7. Blackcat

    Blackcat Registered Member

    Joined:
    Nov 22, 2002
    Posts:
    4,024
    Location:
    Christchurch, UK
    But what about the missed samples from previous tests? I presume you were also sent these?

    Overall, Dr Web has been the slowest vendor at adding these missed samples to its database. So I assume you will take the same amount of time to add the present samples?
     
  8. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    Did you miss the passage where I said that we will make public the amount of garbage that was in the August test set and what impact it had on the results? I think everyone knows, and will have no problem saying, that any large set of malware is not free of garbage. We get a lot of stuff submitted, a lot of that stuff is garbage, and nearly all of the garbage is sorted out. So far we have found 437 damaged files among the 808,000 files which could not be recognized as garbage by automated tools. I am sure we will find some more, and everything we find, or that vendors (like you) report to us and we confirm to be garbage, will be noted in the January report along with its impact on the results, and if needed along with our mea culpa. That is all we can do to be transparent. All you can do is add the real malware, and if you really want to improve the tests (or even if you just want to dismiss them), you can report all the real garbage you find on the DVD I gave to Daniloff.

    Well, as this is not a test based on the wildlist, the things you miss do have an impact on the results, which is why you scored Standard and not Advanced or Advanced+.
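    As a rough sanity check on the numbers in this post, the shift that a few hundred junk files can cause in a set of 808,000 is tiny. The sketch below uses the 437 and 808,000 figures quoted above; the 95% detection rate is a purely hypothetical value for illustration, since no per-vendor figure is given here:

```python
# Sketch: how much can 437 confirmed-garbage files move a score in a
# test set of 808,000? (Both counts are the figures quoted above; the
# 95% detection rate is hypothetical, chosen only for illustration.)
total, garbage = 808_000, 437
detected = int(total * 0.95)   # hypothetical number of detections
missed = total - detected

# Case giving the vendor no benefit: garbage files were all detected.
rate_before = 100 * detected / total
# Best case for the vendor: all 437 garbage files were among the misses,
# so they drop out of the denominator entirely.
rate_after = 100 * detected / (total - garbage)

print(f"{rate_before:.2f}% -> {rate_after:.2f}%")  # 95.00% -> 95.05%
```

    Even in the most favorable case, the score moves by about five hundredths of a percentage point, far less than the gap between rating tiers.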
     
  9. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    I am happy you said this.
    I still cannot understand what makes you publish results based on an uncertain collection. When you produce a figure, it looks very precise, and most people look at the figures, the percentages. You will probably make amendments in January; I am sure you will. But the bad impact occurs today, and it will not be undone in January.
     
  10. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Severyanin,

    The calculation is simply a tool to crystallize certain points of discussion. For me, the best outcome of these tests is not necessarily 100% detection, but a balance between runtime performance and detection. I realize that all too many readers focus on numerical differences in the tables that likely fall below the intrinsic noise of the determination; however, the tiered ratings provided do mitigate this tendency. Personally, I believe the soon-to-be-released V4.44 Dr Web, with its current detection performance, strikes a very reasonable balance. I use it and I'd recommend it to virtually anyone. As I've noted elsewhere, it's a solid product, a solid product that can be improved.
    Excellent, that's exactly what one has to do.
    In a macroscopic examination of the test results, the categorization might be an interesting nuance. As a consumer, the detailed categorization is irrelevant to me. The potential issue of valid malware vs. junk files is a detail that captures my attention, hence the calculation I performed to provide some indication of the scope required to materially impact the results. The change required is rather substantial.
    This is actually a somewhat disingenuous comment. Are you trying to make the point that you believe, in broad strokes, that the final results are completely compromised?
    I'm quite aware of the potential impact that these results may have on your business. It may be immediate and quite real. On the other hand I would seriously question whether the mass market even knows that this test exists. I would imagine that lead adopters who interface to the mainstream market do know of this test, what it does and does not imply, and provide the typical advice you see offered here - which is to develop a palette of options given broad criteria and advise the customer to trial within that palette and make a selection which best fits their personal needs.

    Of course, when you dismissively note that
    you really do start to erode the authority of your technical position given the sample addition profiles exhibited in the periods between successive on-demand tests.

    The ground here is likely firmer than you are willing to acknowledge. We are discussing the results of the 8th on-demand test, with results stretching back 4 years. Over that period, I'm sure that there has been ample opportunity to raise and address technical concerns that may have emerged. Obviously, if you believe the results are untrustworthy, you can either work to encourage or develop remedies or step away from participation.

    It had been previously mentioned, and reinforced by comments made above, that the issue of junk file content is currently being directly assessed. If you believe this is a serious issue, I assume that it is based on quite firm information developed in house based on past test results. Given the obvious importance that you've placed on this specific issue in this thread, have those details been communicated to the testing group? That would seem to be the obvious place to start the entire exercise.

    Blue
     
  11. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    If they are not the ones that bother our users, I am sure it will take a while.
    The alternative: make them detectable tomorrow, by using "automated tools".
    Our Lab will never do that, though.
     
  12. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    Yes, we do communicate the data to the right person. And we are trying to give the collections we receive as much attention as possible.

    I would prefer to wait for the report from the Lab about the tests on the last collection.
     
  13. Severyanin

    Severyanin AV Expert

    Joined:
    Mar 19, 2006
    Posts:
    57
    We are probably speaking from different positions. For us, though you may smile at it, the results are compromised when we find a single junk file in what the website said was a virus, or a virus in what was said to be a false alarm. For the public, of course, thousands and tens of thousands of missed samples always prevail. They don't bother to ask what those samples are (in-the-wild viruses, junk files, dumps, old viruses, etc.). But the difference is really there.

    The collection on which the test is based contains files believed to be bad. That is all. But I am still here with you because we are speaking about the AV-Comparatives test, because some people here judge product quality by the test results. If you reported to me something that bothered you on your computer today, you would hear nothing from me but apologies; then it comes directly to our job. The last thing I want to hear from the users who trust us is that we miss real malware on their computers.
     
  14. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    You're quite right. When you get down to it, the perfect AV has a signature base of 1, namely the next piece of malware its owner is about to be exposed to. In a test, that product would miss all but 1 sample. That is a state which is not realizable in practice. Stepping back a bit, missed samples which the user will never be exposed to are a hypothetical issue. The one change that has been occurring over the past few years is the increasingly extensive and rapid connectivity between everyone, which operationally means that we all tend to be exposed to the same pool of malware. That changes the scope of coverage needed, and perhaps how one goes about developing that coverage.

    Let me put it this way: I'm one of your paying customers, so by definition I trust your product. I've not seen anything, even with the current test standing as is, which suggests that trust is misplaced. As with all of us, improvements can be made. It's really a question of how best to spend limited resources to make the improvements that matter most. There have been some obvious changes in V4.44 that matter a lot to me as a user, and those changes have been for the better.

    Blue
     
  15. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    I'm sure improvements can be made by both parties, but people should not be too quick to judge.

    I've certainly realised since this last test that drweb has gone out of favour with quite a few people on here... people who obviously just look at the percentages, which is a terrible way to judge an antivirus.

    Severyanin is in a perfect place to make any improvements, or to take any comments/suggestions and try to implement them, or to totally dismiss them, as I'm sure he will :)

    I'm sure when I have any, I will dish them out :
     
    Last edited: Sep 23, 2007
  16. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    Test detection percentages are the singular focus of all too many people here, which is unfortunate. How that translates into market performance, I have no idea.

    As for Dr Web going out of favor... programs move in and out of favor with users on a daily (hourly?) basis. We've had threads here in which current top performers in detection have fallen out of favor in some circles for specific issues unrelated to detection. That obviously presents opportunities for the rest of the market, but it's up to the vendor to understand the current market dynamics in order to capitalize on that opening.

    Blue
     
  17. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    Yep, it's a sad world we live in.

    Seeing as this started out being about scan speed, I will continue it :)

    @Severyanin - why does drweb have a slow scan speed? I know it handles many packers etc., but surely others do as well, and their on-demand scanning is much quicker than drweb's. It doesn't really bother me, but it does bother some.

    Will there be any changes to improve this in the near future?
     
  18. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    Well, in today's world these two terms are used interchangeably; for the average user, a heuristic detection on a clean file carries pretty much the same "importance" as a signature detection on a similar clean file. In either case the average user is going to think it might be malware and is going to quarantine or delete it. Next thing you know, his/her programs are not working properly due to the false detection. The effect is the same anyway. :)

    Okay, glad to hear that. :D

    1) What if in fact the 12 ARE macro viruses? :D
    2) What if the figure 44410 is indeed mostly accurate?

    Given that nothing is confirmed either way regarding those files, you cannot decisively say the test set is flawed. How are they classified? I do not know. Obviously there has been *some* basis - and based on this the files have been classified. While classifying, I guess one can assume a mostly correct classification - after all, a Zlob trojan is detected as Zlob and Trojan by all AVs out there (Dr.Web detects it as Trojan.Popuper, McAfee as Puper, etc. etc.). :)

    The "other malware" category consists of those kinds of malware which are not in significant enough numbers to be given their own category (flooders, nukers, exploits and stuff like that I think).

    It may indeed leave us guessing, but it doesn't provide a conclusive verdict either way. PR teams of AV companies show off VB100 awards. Now what's to say about the malware over there? Or AV-Test? :)

    One can raise several questions, about anything. ;)

    In both positive and negative ways. And there is nothing intentionally being done to defame Dr.Web.....

    I don't know the answer to this question. I do know that no test set is 100% garbage free and there will always be some amount of garbage in each test set.

    But I will ask you a similar question: Does Dr.Web always detect only real malware 100% of the time? Can I take it for granted that Dr.Web never creates false alarms, or that it never detects malware in corrupt files at all?

    I know you have your doubts, and I also know that there is no decisive conclusion on the presence and impact of garbage files on the final result. So I am not sure why you are expressing disapproval and questioning the validity of the tests based on unproven theories.

    Would you rather have AV-comparatives say "do not trust these scores" and potentially let people have "blind faith" in assuming that Dr.Web has very good protection rates until something happens in January and all those customers get dissatisfied (assuming that the events turn out opposite to what you expect)?

    As such, a "Standard" rating is a very good score by itself, and Dr.Web can indeed be a very solid product, but to make assumptions and theories the way you are is going a bit far IMO.

    Judging from older posts by Dr.Web staff in this forum about AV-comparatives, this is not the first time the company has had a problem with this test. Of course, it didn't get so ugly back then.

    I am pretty sure 4.44 will do OK at AV-test.org. But I already have an idea of what may happen. Of course, I may be wrong. :)

    Of course, we are speaking from different positions. Taking a similar analogy, having a lot of FPs in any AV severely compromises its impression upon some users. You cannot really prevent the FPs from happening - you can only fix them when they are reported, AFTER verifying whether it is really a FP (for example, riskware type software).

    The same applies to this as well....If you find such files, you need to report them. :)

    On an ending note for this post, I want to say that I am a Dr.Web license holder (for now at least) and also hold a license for its "sister" (though it is independently developed) Virus Chaser. In my experience Dr.Web's scan engine is very thorough and scans very deeply. I believe the scan speed can indeed be improved. And I also believe Dr.Web offers decent protection, and the 4.44 version is a good improvement over its predecessors - now if you only got an encrypted quarantine working! :)

    AV-comparatives is clear about the ratings achieved, and it also makes clear that even Standard-rated products are worthy of use. And we know that vendors like Eset, which go to great lengths to verify malicious samples, also scored pretty well in the test. I am sure you have doubts, maybe not unjustified ones, but unless there is concrete proof, and unless there is a conclusive, definite explanation of WHY the test is untrustworthy, one cannot bash it. All I have seen so far is theories and the fact that Dr.Web's analysts found what they claim to be a significant number of files that are not really malware. There have been claims of various things, lots of paranoia, but nothing definite.

    Until something concrete is seen to show why AV-comparatives' results are bad, it is difficult to believe many things....:doubt:

    P.S: To everyone who has sent me a private message over the past few days, please bear with me for the delay in replying, I am currently unwell and barely able to get out of bed. I will try to reply to your messages in the coming days. Thank you for your patience! :)
     
  19. Banshee

    Banshee Registered Member

    Joined:
    Nov 10, 2004
    Posts:
    550


    I personally do not think that drweb "went out of favor" just because of the percentages. I think it also has to do with the expectations you (CSJ) "created", and then the bubble burst.

    Let me explain:

    Many people ask on the forum for advice. Some of them are completely green.

    Here you come with your suggestions that the doctor is great, that it is fantastic and this and that. Now what happened? Some people believed it.

    Those who did not bother to investigate your claims/fantasies (have your pick) and bought it eventually found out that the doctor is not what they thought it was. It is not top tier. Simple. No big deal to some, but a big deal to others.

    It is like comparing a wheelbarrow with a Ferrari. Makes no sense.

    Couple this with a few other things they did not like and boom, they ditched the doctor.

    That is why I told you to back up your claims.

    Thanks
     
  20. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    I can smell what you're talking about,

    I'm completely happy with my drweb and always have been; drweb has certainly matched my 'expectations'.

    You speak of claims and fantasies, but I speak no lies; if you can surely prove this, I will admit I'm wrong.

    Many AVs can be bashed, you know: Kaspersky with its chkdsk problems, Avira with its program and update problems, NOD32 with its default settings with no advanced heuristics. What would detection be for the majority that don't tweak their settings, 20%? Because NOD32 relies on them more than others do.

    Maybe IBK's tests should be done on default settings; Avira might not flag a lot of things, and NOD32 would be forced to turn on its advanced heuristics, or more likely get dropped from the test. etc etc

    I could go on and on, but what you speak of, misguiding people and creating fantasies or wishes for drweb, is absolute bullshit!

    I've stated the facts, the truth; you choose not to accept this because it shows drweb in a better light. Sure, you will say I've spoken such things to 'make' drweb show in a better light; this would be correct, I don't deny it, but if you can show me something that I have said to be a lie, or a misguided truth, please tell me.

    drweb's false alarm rate - overreaction
    drweb's detection rate - overreaction

    Anyone who bases a detection rate on such a large test set that is not individually checked is talking out of their ass, and yes... your breath smells :)

    but all jokes aside,

    at least with small tests, no disrespect at all towards you IBK, the samples can be checked throughout: checking the code, their actions on one's computer, their removal properties etc.

    it's good enough for me, and it should be for anyone else who 'uses' drweb.

    like what?

    People I've recommended drweb to personally are very happy with drweb, I assure you.
    I think it is in fact you who is making misguided accusations; I believe these to be your thoughts, and nobody else's.

    I believe Severyanin has also made quite a valid 'backup' of my claims.


    o_O
     
  21. Banshee

    Banshee Registered Member

    Joined:
    Nov 10, 2004
    Posts:
    550
    CSJ,

    ---
    i could go on and on, but what you speak of, misguiding people and creating fantasys or wishes for drweb is absolute bullshit! [....rest of the fiction clipped]
    --------


    I did not say that you misguided people on purpose.

    You probably spoke because you were excited about the product, and in your mind the product was absolutely great. What is true in your mind is not always true in reality.

    Look, you even went as far as saying that the tests were set up for drweb to fail.

    This alone makes me think you are a bit unwell.

    You also did not realise that some people looked up to you.

    Remember that we were all green once, and you know how easy it is to believe stuff. It's not rocket science.

    To make a long story short: you like the software, use it. You want to say stuff, say it. Make sure you back it up, though.

    If you can't back it up, don't say it.
     
  22. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    Taken out of context; the past 2 pages or so, from one AV expert to the next, have been all about the same thing.

    It is; you just refuse to believe it. ;)
     
  23. Banshee

    Banshee Registered Member

    Joined:
    Nov 10, 2004
    Posts:
    550
    Oops, I missed a few.

    CSJ,

    >like what?

    I think some left because they had problems with support being either rude or uncooperative... I am sure there are more reasons besides the obvious.

    >I think it is in fact you who is making misguided accusations, I believe these to be your thoughts, and nobody else's.

    But then again, you also believe that drweb is a fantastic antivirus. There you go.
     
  24. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    Any proof?

    Support has always been fantastic to me: 3 minutes for a reply at the weekend, very informative too.

    Who else can offer this?

    So people left drweb because of support? I highly doubt this.

    :D
     
  25. Banshee

    Banshee Registered Member

    Joined:
    Nov 10, 2004
    Posts:
    550
    CSJ,


    I think this thread started to get interesting with IBK and a drweb "expert" discussing stuff. Why don't we let them talk instead of cluttering the thread?

    You could also open a new thread about drweb so that you can rant over there.

    I really want to find out how this whole thing pans out, so I'll sit and read.

    Ok, done.
     
    Last edited: Sep 23, 2007