Security vendors question accuracy of AV tests

Discussion in 'other anti-virus software' started by Thankful, Jun 26, 2007.

Thread Status:
Not open for further replies.
  1. Thankful

    Thankful Savings Monitor

    Joined:
    Feb 28, 2005
    Posts:
    3,736
    Location:
    New York City
  2. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    That is absolutely bullshit.
     
  3. MalwareDie

    MalwareDie Registered Member

    Joined:
    Dec 8, 2006
    Posts:
    500
    Total BS. I agree with IC.
     
  4. Inspector Clouseau

    Inspector Clouseau AV Expert

    Joined:
    Apr 2, 2006
    Posts:
    1,329
    Location:
    Maidenhead, UK
    You do not have any other option :D
     
  5. Technodrome

    Technodrome Security Expert

    Joined:
    Feb 13, 2002
    Posts:
    2,140
    Location:
    New York
    FIXED! (-:


    tD
     
    Last edited: Jun 26, 2007
  6. FRug

    FRug Registered Member

    Joined:
    Feb 7, 2006
    Posts:
    309
    lol, the wildlist alone has 766 entries at the moment. VB may use fewer samples than Marx or Clementi, but not THAT few.
    In addition to the wildlist they test additional larger sets like the "standard" set, macro sets, a few thousand poly samples, etc.
    I agree they use fewer samples, but 30-50 is WAY off from the correct number.


    I guess the sentence in the article totally confused some of the special HIPS proactive tests with on-demand scans...
     
    Last edited: Jun 26, 2007
  7. pykko

    pykko Registered Member

    Joined:
    Apr 27, 2005
    Posts:
    2,236
    Location:
    Romania...and walking to heaven
    Fixed on the forum, but the website still shows the wrong info.
     
  8. Technodrome

    Technodrome Security Expert

    Joined:
    Feb 13, 2002
    Posts:
    2,140
    Location:
    New York

    I meant this to be a joke. ;)


    tD
     
  9. De Hollander

    De Hollander Registered Member

    Joined:
    Sep 10, 2005
    Posts:
    718
    Location:
    Windmills and cows
    Before I can agree with you :D [learning mode] Why [learning mode] :)
     
  10. herbalist

    herbalist Guest

    About the only thing those tests are good for is advertising. All they do is tell you how each one did against a very specific set of samples at a specific moment in time. The results are almost meaningless.
    The article. The vendors' complaints. The tests themselves. All BS.

    I hate to think what they'll come up with for more comprehensive testing of suites, and how badly the results will be misused to discriminate against single-purpose security apps.

    Rick
     
  11. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    7,927
    Location:
    The land of no identity :D
    I am holding a very neutral stance on this issue for now, however I will say that I was expecting an article of this sort to be published sooner or later.
     
  12. C.S.J

    C.S.J Massive Poster

    Joined:
    Oct 16, 2006
    Posts:
    5,029
    Location:
    this forum is biased!
    This didn't surprise me, although I don't think it's completely BS, just a few disagreements.

    I still want to see a removal test for the tested products.
     
  13. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,057
    Location:
    North Carolina
    I think calling it BS, along with the other tests that are performed, is childish. There is some validity to all of these tests; just how much stock you put in them is entirely up to you. So to each his own.
     
    Last edited: Jun 26, 2007
  14. RejZoR

    RejZoR Registered Member

    Joined:
    May 31, 2004
    Posts:
    6,426
    I think they mixed up the numbers lol. It should be vice versa...
     
  15. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,057
    Location:
    North Carolina
    BS is everywhere.
     
  16. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    8,698
    Hello,

    I think any sort of testing is legitimate. Whether users should accept the findings is a different story.

    I think the best way of testing AV would be:

    - Pay 10 random script kiddies and 10 proclaimed hackers to collect, assemble, compile, and prepare a random set of 1,000 samples, including placebos, innocent programs, web exploits, etc.

    - Test the AV for a full week in an isolated environment (virtual LAN) where both experts and novices work the machines and have to interact with exploited web sites, PDFs, docs, screensavers, links in chat, etc.

    - Compile the results and present them to the public.

    - The 10 kiddies/hackers can be replaced with any number of people capable of writing code, collecting code, and finding malware samples.

    Mrk
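    The "compile results" step in a test that mixes real malware with placebos could be scored as a sketch like the one below. This is purely a hypothetical illustration of how detections and false alarms on innocent samples would be tallied per product; it is not any tester's actual methodology, and all names in it are made up.

```python
# Hypothetical scoring for a sample set mixing real malware with
# "placebos" (innocent programs): a product is judged both on what
# it catches and on what it wrongly flags.

def score(results):
    """results: list of (is_malicious, was_flagged) pairs, one per sample.

    Returns (detection_rate, false_positive_rate).
    """
    malware = [flagged for malicious, flagged in results if malicious]
    benign = [flagged for malicious, flagged in results if not malicious]
    detection_rate = sum(malware) / len(malware) if malware else 0.0
    false_positive_rate = sum(benign) / len(benign) if benign else 0.0
    return detection_rate, false_positive_rate

# Toy run: 3 malware samples (2 caught), 2 placebos (1 wrongly flagged)
sample_run = [(True, True), (True, True), (True, False),
              (False, False), (False, True)]
print(score(sample_run))  # → (0.6666666666666666, 0.5)
```

    Reporting the two numbers together matters: a scanner that flags everything would score 100% detection but would also fail every placebo.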
     
  17. herbalist

    herbalist Guest

    It would be much more realistic than the testing they do now; in the real world, you don't know where the next threat will come from. Instead of just AVs, open the tests up to other security apps like sandboxes, HIPS, virtualization, etc., and include people who know how to use them. Open up the choice of operating systems, not just XP or Vista. Add a few alternatives to Windows and a few older systems to reflect real-world usage. The test should include at least one computer hobbyist's system. I'll volunteer mine. :D

    Signatures + Heuristics
    vs
    containment
    vs
    a virtual system
    vs
    default-deny
    vs
    not commonly targeted
    vs
    secure by design.
    I couldn't imagine an AV vendor ever agreeing to a test like that. Too embarrassing for signature-based apps, but the alternatives would shine.
    Rick
     
  18. herbalist

    herbalist Guest

    Mrkvonic,
    Had an idea for testing, a variation on yours. Instead of an isolated network environment, how about an obstacle course? Have a group of hackers create a series of web pages using exploits, embedded code, whatever methods they choose, containing all kinds of user file types and types of active content with the intent of defeating security-ware and compromising the computer. The computers/apps being tested have to be fully functional, no disabled JS, Java, text-only browsers, etc, functionality typical of the average internet computers. Links from one test page to another could be in the active content to ensure compliance. Users have to open the links in order to finish the course, no avoiding the malicious content. Systems must withstand the attack. This duplicates the average user who clicks on anything. No routers, no hardware firewalls. Strictly testing software and the computers themselves. A few simple rules like no destructive malware. No previews of the course. You find out what the threats are when you get to them. System snapshots before and after the course to judge the effectiveness of each app or package. Let the malware writers rate the apps and packages.
    Rick
     
  19. Mele20

    Mele20 Former Poster

    Joined:
    Apr 29, 2002
    Posts:
    2,495
    Location:
    Hilo, Hawaii
    This thread is very confusing. Some of you are replying based on a joke fix. :(

    Maybe this thread should be canned and started over.

    I have no objection to adding tests for suites but those results would not interest me. I don't use suites. I like the layered approach.
     
  20. LoneWolf

    LoneWolf Registered Member

    Joined:
    Jan 2, 2006
    Posts:
    3,408
    A wise choice.:cool:
     