AV-Comparatives Retrospective / Proactive Test May 2011 released!

Discussion in 'other anti-virus software' started by clayieee, May 25, 2011.

Thread Status:
Not open for further replies.
  1. MarkKennedy

    MarkKennedy Registered Member

    Joined:
    Jun 16, 2011
    Posts:
    19
    I believe in such cases it is incumbent on the vendor to explicitly state what their objections to the test/methodology are (this is how I got involved in this thread in the first place, to explain why Symantec withdrew). Each reader can assess the legitimacy of those objections on their own (as many have done with mine). But simply saying "I don't agree with the methodology" does not cut it for me, and I suspect not for Andreas either.

    Even the best test available today has certain limitations. Testing is easy. Good testing is very, very hard. A large portion of my time is spent on testing and testing issues, and I know the testers' pain. Compromises in methodology necessarily happen. Some of those change things a little; some can change them a lot. For example, running with a network capture/replay system can change things a little. Manually downloading samples from an internal website to the desktop and double-clicking them to launch can change things a lot. And not re-imaging between samples can invalidate the whole test.

    The methodologies used are generally well defined, so these compromises can be seen and, where found lacking, objected to.
     
  2. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    The way it was worded, it sounded as if all vendors act this way, and I know for a fact this is not the case. If that were true, neither AV-Test nor AV-Comparatives would exist, as all AVs would have dropped out sooner or later. Which AV hasn't had a "bad test day"?
     
  3. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    lol, this was the case with Panda; look at them now: they are participating in AV-C and they agree with the methodology
     
  4. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    You're absolutely wrong.
    Actually, we weren't doing that badly at all in the 2004/2005 tests before we dropped out. And the fact that we participate does not mean we agree with the methodology. We do participate in AVC because of the Real-World tests, but we don't agree with the retrospective test methodology, for the same reasons Mark described (even though our product's results are very good in this test).
     
  5. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    To reinforce Mark's comments, everybody in the industry, testers and vendors alike, recognizes that an on-demand or retrospective test does not reflect the real-life situations of millions of users. Even AVC and Andreas agree, as is reflected in the On-Demand & Retrospective test PDFs.

    It's mainly because of uneducated users, like some here at Wilders, who are determined to treat AV tests and vendors' relations with testers as a conspiracy theory, that AV test methodologies which have been outdated for many years remain in use.

    I ask that you do everyone a favor, most importantly end users, and write to your favorite tester asking them to drop any tests which use outdated or limited-view methodologies. This will give testers more time and resources to concentrate on real-world tests.

    Edit: I'm not only talking about outdated methodologies at AVC. VB's RAP, for example, still tests without Internet connectivity.
     
  6. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    @Zyrtec: I do not comment on other testers' tests, as I have no insight into them.

    @Mark/Pedro: sorry if my rant sounded too general, even though I wrote in the singular. I will try to keep quieter in future to avoid doing harm through misunderstandings.

    @Pedro: to be exact, it wasn't Panda that dropped out. But I do not want to comment on those old stories anymore, especially not publicly.

    @users: at the end of the year you can again vote for the tests you want to see (and those you do not). Instead of emailing me, please express your vote in the poll (like last year, it will be some 20 questions, ~5 minutes), which is easier to summarize. But you are free to email vendors if you want to see them in certain tests (e.g. WPDT).
     
  7. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Maybe they want to test the vendor response, and not the cloud response (where the reputation is mainly built by the users).
     
  8. bellgamin

    bellgamin Registered Member

    Joined:
    Aug 1, 2002
    Posts:
    8,102
    Location:
    Hawaii
    It's a good objective. A tester who throws slams at unspecified vendors thereby damages ALL vendors, and also damages his own perceived objectivity.
     
  9. smage

    smage Registered Member

    Joined:
    Sep 13, 2008
    Posts:
    378
    And some users are persuaded that testing organisations are biased and not independent, so they perform their own tests and publish the results via videos on YouTube. This is the new trend these days; there are so many channels dedicated to security software testing, and they have a large audience.

    So vendors and testing organisations had better resolve their problems quickly, otherwise there will be more confusion and chaos in the security testing industry.

    Moreover, vendors should please stop luring users by claiming that they can offer 100% protection; any test which shows such results is useless. The VB100, the 0-day protection test from AV-Test.org and some other sponsored tests are the greatest joke ever, and they contribute to making users lose faith in testing organisations in general.

    Regards
     
  10. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    Do you know what the purpose of the VB100 is?
     
  11. smage

    smage Registered Member

    Joined:
    Sep 13, 2008
    Posts:
    378
    Yes, I know: it represents the basic standard required to be recognised as a legitimate and properly functioning anti-malware solution.

    This is why I do not like the way some vendors extrapolate from this result just for marketing purposes. Look at McAfee and you will understand.
     
  12. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Maybe I don't understand all of this, but I trust IBK to be the most professional of all the testing establishments out there. Is he perfect? Maybe not, but I have known him for years and I do know he puts his soul into this, and that it is more than just his name on the line each time results are published.

    I know that he will spend extra time as needed to ensure accuracy rather than rushing to meet a deadline. He doesn't have any deadlines; he just knows how to test and to make it as accurate as humanly possible. So I really don't care to read threads like this, as I made up my mind about his ability and integrity a long, long time ago. Post your ramblings if you wish, but this gentleman deserves better from us.
     
  13. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    So what's the reason to participate, if you disagree with the test methodologies?
     
  14. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    Are you talking to me, or someone else?
     
  15. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    No, just a question in general.
     
  16. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,102
    Location:
    North Carolina USA
    No problem. As I stated, I may not understand all of this thread, but I do know it is wrong to come to the table with a blank set of cards, which is what most here are doing.
     
  17. shanep

    shanep AV Expert

    Joined:
    Sep 10, 2008
    Posts:
    54
    Well said.
     
  18. firzen771

    firzen771 Registered Member

    Joined:
    Oct 29, 2007
    Posts:
    4,815
    Location:
    Canada
    Well, last time I checked, I never saw any of the retrospective or on-demand tests claiming "this is how effective your product will be during normal use". I think of it like a benchmark, kind of like car benchmarks for performance and fuel specs. Cars are tested and rated on a dyno (just a standstill power test on big rollers) for their HP and torque numbers, but nobody says those numbers directly translate into a win or a loss on a race track or when driving the car. In a race situation (or, in the case of AVs, in 'real-world usage') other factors about the car come into play as well, like weight, tires, etc.

    It's how things are done; people like to know the benchmarks of their cars (AVs) regardless of whether or not those numbers directly translate into real performance. There are overall race track tests done for cars as well (lap times), which are kind of like real-world dynamic tests.

    That doesn't make it wrong to test and know the benchmarks for the product; I think it's up to the user not to be so stupid as to think the numbers directly equal real-world performance. Hell, vendors do the exact same thing at the opposite end of the spectrum: they always go around tooting their own horn with the "100% on test A or whatever" that they plaster on their sites, regardless of whether the test happens to be an all-encompassing dynamic test or a static one.
     
  19. pbust

    pbust AV Expert

    Joined:
    Apr 29, 2009
    Posts:
    1,176
    Location:
    Spain
    I didn't say that. In fact I said the exact opposite. Are you trying to put words in my mouth or are you just not paying attention?

    Those different tests are all part of a "whole" test. If you think about it, the Real-World tests (called Whole Product Tests by AVC) already exercise the heuristic components (as well as generic sigs, behavioural engines, cloud, etc.). Many vendors nowadays update their heuristic engines very often, in some cases every few weeks. So testing against a 4- or 6-month-old heuristic engine is not only NOT representative of the real world, it even cripples the product further.

    As I said above, if you are updating your heuristic engine every 1 or 2 weeks AND you have cloud-based heuristics, then it is definitely the wrong benchmark, as you're comparing apples to oranges. It's not a question of whether users are stupid or not; it's a question of most people (even the most technically proficient here at Wilders) not understanding the technology behind an AV, and therefore not understanding the impact of a specific test methodology, thereby coming to the wrong conclusions about what a test result actually means.

    Actually, those tests that show 100% detection are precisely the On-Demand, XY100% or whatever tests which you are defending. Again you are contradicting yourself. If you don't like that, then you should be promoting Real-World tests and not outdated tests. But first it looks to me like you need to make up your mind on what you want.... is it limited-view tests like On-Demand (which show 100% detection), or is it Real-World Tests (in which vendors won't be able to show 100% detection)?
     
  20. Baserk

    Baserk Registered Member

    Joined:
    Apr 14, 2008
    Posts:
    1,321
    Location:
    AmstelodamUM
    For what it's worth, I've noticed that AV-Test.org uses only 29 samples for the 'Blocking of malware on or post execution' part of their 4 detection tests.
    Eset Smart Security 4.2 scores a low 34% detection there, but only 29 samples seems like a rather small sample set to base scores on. link
    (Just a first-thing-that-comes-to-mind idea, though; if anyone can tell me how such a small sample set is or can be representative, I'd be happy to learn why.)
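
    As a rough back-of-the-envelope illustration (mine, not from the thread, and assuming the 34% corresponds to roughly 10 of the 29 samples), a standard Wilson score interval shows just how loosely 29 samples pin down a detection rate:

    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """Approximate 95% Wilson score confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - margin, centre + margin

    # Assumed figures: 10 of 29 samples detected (~34%, as in the post above).
    low, high = wilson_interval(10, 29)
    print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 20% to 53%

    In other words, a 34% score on 29 samples is statistically compatible with anything from about one-in-five to one-in-two detection, which is why such small sample sets are better read as indicative than as precise.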
     
  21. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    This is just one reason why I can't completely trust AV-Test evaluations.
     
  22. toxinon12345

    toxinon12345 Registered Member

    Joined:
    Sep 8, 2010
    Posts:
    1,200
    Location:
    Managua, Nicaragua
    The VB100 award shows that the tested product provides basic detection of the WildList while generating no false positives;
    this list covers the threats actually causing problems to users.

    If such basic requirements are not guaranteed by the product, then you're more exposed to such threats, or possibly exposed to false positives.

    Most vendors have failed around 14-25 tests in this evaluation; McAfee, 23.
     
    Last edited: Jun 23, 2011
  23. MarkKennedy

    MarkKennedy Registered Member

    Joined:
    Jun 16, 2011
    Posts:
    19
    And if the vendor's response is in the cloud?
     
  24. IBK

    IBK AV Expert

    Joined:
    Dec 22, 2003
    Posts:
    1,886
    Location:
    Innsbruck (Austria)
    The samples are at most 1 week old, and the heuristic engine is at most 1 week old.
     
  25. malexous

    malexous Registered Member

    Joined:
    Jun 18, 2010
    Posts:
    830
    Location:
    Ireland
    Dennis Technology Labs and AV-Test.org show 100% detection.
     