AV-Comparatives Retrospective/Proactive Test May 2009

Discussion in 'other anti-virus software' started by guest, May 27, 2009.

Thread Status:
Not open for further replies.
  1. kfjhfbf

    kfjhfbf Registered Member

    Joined:
    May 29, 2009
    Posts:
    2
    stefan please answer :)
     
  2. sourav_gho

    sourav_gho Registered Member

    Joined:
    May 22, 2009
    Posts:
    141
    So is using HIPS, which can block most threats. But the issue you raised was that layered security is not good. I've proved my point, then, that it is better and can keep our systems safe to the extreme (you are yourself suggesting that) ;)
     
    Last edited: Jun 1, 2009
  3. IceCube1010

    IceCube1010 Registered Member

    Joined:
    Apr 26, 2008
    Posts:
    963
    Location:
    Earth
    That is terrible. I hope they change their views and make SBIE compatible.

    Ice
     
  4. steve1955

    steve1955 Registered Member

    Joined:
    Feb 7, 2004
    Posts:
    1,384
    Location:
    Sunny(in my dreams)Manchester,England
    That's only "his opinion" (or one he's been told to have!). I know a few "Microsoft Certified Engineers" whom I wouldn't let loose on any system; all the title tends to prove is that they have paid to gain a qualification.
     
  5. nosirrah

    nosirrah Malware Fighter

    Joined:
    Aug 25, 2006
    Posts:
    560
    Location:
    Cummington MA USA
    My 2 cents on this matter.

    I wonder if anyone ever thought about using the vast amount of unbiased data already on the web to do a study.

    It should be easy enough to get a rough idea of what % of the general population uses each of the major antivirus vendors.

    Take those same vendors and track how frequently they show up in help forum threads where the user is asking for additional malware removal help.

    Now you will have how frequently each vendor is used, and also how frequently each vendor shows up in an environment where it has failed.

    You could then devise a scoring system where a vendor's score is determined by scaling its frequency of failure against what % of the population is using it.

    Think of it like this:

    Vendor (A) is used 75% of the time.
    Vendor (B) is used 30% of the time.

    Vendor (A) shows up in 90% of help request threads.
    Vendor (B) shows up in 15% of help request threads.

    (These do not have to add up to 100%, as some people double up.)

    Vendor (A)'s score would be 90 * (100 - 75), or 2250.
    Vendor (B)'s score would be 15 * (100 - 30), or 1050.

    Obviously 0 is the best you could do, by having no help forum requests regardless of your popularity. The best scores would always come from a combination of massive popularity and still very low failure rates.
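    The scoring idea above can be sketched in a few lines of Python. This is just an illustrative sketch of the formula as described (lower is better); the vendor names and percentages are the example's own made-up numbers, not real data:

    ```python
    def vendor_score(usage_pct: float, failure_pct: float) -> float:
        """Proposed score: failure frequency scaled against popularity.

        usage_pct   -- % of the population using this vendor
        failure_pct -- % of help-request threads where this vendor appears
        Lower scores are better; 0 means no failures at all.
        """
        return failure_pct * (100 - usage_pct)

    # The two example vendors from the post:
    print(vendor_score(75, 90))  # Vendor (A): 90 * (100 - 75) = 2250
    print(vendor_score(30, 15))  # Vendor (B): 15 * (100 - 30) = 1050
    ```

    Note the (100 - usage_pct) factor: it rewards popular vendors, so a vendor can only reach a low score by combining wide use with few failure reports.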

    The scores should reflect a more accurate representation of real-time and real-world performance. Each help request stops time for each individual infection and vendor, making the age of the sample irrelevant. Age of samples is becoming a growing complaint, and rightfully so; this type of research would at the very least take the age of the samples out of the equation.

    Any test where the samples are more than a few hours old only represents how well an application adds legacy defs, AFAIK. Think of it this way: you can score 99% on most of the current testing models while scoring a perfect 0 in the real world, and here is how.

    The test has 100,000 samples from the last year; 99% would be 99,000. What if the 1,000 missed were all from this week, and the other 99,000 were from before that? The test would give wildly inaccurate and dead-on accurate results at the same time, depending on how you looked at them.
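    The 99%-vs-0 scenario above works out like this. A toy calculation using only the example's numbers, with the (assumed) worst case that every missed sample is recent:

    ```python
    total_samples = 100_000   # test set from the last year (example's figure)
    missed = 1_000            # assume all misses are this week's samples
    detected = total_samples - missed

    # Headline detection rate: looks excellent.
    overall_rate = detected / total_samples   # 99,000 / 100,000 = 0.99

    # Real-world view: if the 1,000 recent samples are exactly the misses,
    # detection on current threats is zero.
    recent_total = 1_000
    recent_detected = 0
    recent_rate = recent_detected / recent_total  # 0 / 1,000 = 0.0

    print(f"overall: {overall_rate:.0%}, this week: {recent_rate:.0%}")
    ```

    The same data thus yields a 99% score and a 0% score at once, depending on whether you weight by sample count or by sample age.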
     