AV-Comparatives: Real-World Protection Test October 2014

Discussion in 'other anti-virus software' started by FleischmannTV, Nov 12, 2014.

  1. Firecat

    Firecat Registered Member

    eScan uses it too.
     
  2. Firecat

    Firecat Registered Member

    AVG has improved. I guess the new cloud technology does work after all.
     
  3. Macstorm

    Macstorm Registered Member

    Nope, I don't think Win8.1's Windows Defender is that bad.
     
  4. anon

    anon Registered Member

    Last edited: Nov 12, 2014
  5. harsha_mic

    harsha_mic Registered Member

    I am a little unclear here. Has SmartScreen ever been tested for its protection?
    I believe it would at least give a pop-up if a program is relatively new and untrusted. Can it be considered something like a cloud anti-executable?
     
    Last edited: Nov 13, 2014
  6. avman1995

    avman1995 Registered Member

    The sample set is way too small for a month. Considering the thousands coming out every day, in reality we have millions of malware samples every month. So a few hundred vs. 10,000,000, where is the catch?! So Avast missed 4.1%, right? Not a big deal if you know mathematics.

    Just do the math and it's just 24.2 samples missed for Avast! Big margin, eh?! I am sure all the AV companies get the missed samples after the test is done, so everyone must have created definitions for them by now.
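    For what it's worth, the post's own figures can be back-solved to get the implied test-set size; the ~590 below is inferred from those numbers, not taken from the actual report:

    ```python
    # Back-solving the implied test-set size from the post's own figures:
    # a 4.1% miss rate corresponding to ~24.2 missed samples.
    miss_rate = 0.041
    missed = 24.2

    implied_set = missed / miss_rate      # samples apparently tested
    print(round(implied_set))             # ~590
    print(round(implied_set * miss_rate, 1))  # back to 24.2 missed
    ```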
     
  7. anon

    anon Registered Member

    Calm down and look at the test results.

    Whether you like the results or not, is it necessary to point out that the sample set is the same for your beloved AV and all the rest?
    --------------------
    lol...........
    You forgot an excuse for AV-Test.org also........
    ------------
    Aug. results:
    https://www.wilderssecurity.com/thre...n-test-august-2014.368252/page-2#post-2409763

    Sept. results:
    https://www.wilderssecurity.com/thre...for-september-2014.369278/page-4#post-2423541
    ------------------------------
     
    Last edited: Nov 13, 2014
  8. Minimalist

    Minimalist Registered Member

    If you know how many viruses are out there, you can calculate how big a sample must be to reach a desired level of confidence. You can use this simple calculator: http://www.surveysystem.com/sscalc.htm
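    Calculators like the one linked above typically implement the standard sample-size formula for estimating a proportion, with an optional finite-population correction; a minimal sketch, using the conventional defaults (z = 1.96 for 95% confidence, worst-case p = 0.5):

    ```python
    import math

    def sample_size(z=1.96, margin=0.05, p=0.5, population=None):
        """Required sample size to estimate a proportion.

        z          -- z-score for the confidence level (1.96 ~ 95%)
        margin     -- desired margin of error (0.05 = +/- 5 points)
        p          -- assumed proportion (0.5 is the worst case)
        population -- finite population size, or None for "infinite"
        """
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        if population is not None:
            # finite-population correction
            n0 = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n0)

    # The required sample barely changes whether the malware "population"
    # is thousands or millions: ~385 samples either way at 95% / +/-5%.
    print(sample_size())                       # 385
    print(sample_size(population=10_000_000))  # 385
    ```

    This is the point behind the calculator: the confidence of a random sample depends almost entirely on the sample size itself, not on how large the total population is.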
     
  9. FleischmannTV

    FleischmannTV Registered Member

    So if you do the math and take 10,000,000 samples (your figure) into account, even a miss percentage in the low single digits leads to a missed set in the hundreds of thousands.
     
  10. ArchiveX

    ArchiveX Registered Member

    Fortinet ahead of many big players...:eek:
     
  11. Minimalist

    Minimalist Registered Member

    If the sample is too small, it can't be considered representative. The confidence level is too low and we could disregard the whole test...
     
  12. FleischmannTV

    FleischmannTV Registered Member

    I understand, thank you.
     
  13. Osaban

    Osaban Registered Member

    You may have a point there, and it might also be valid for one, two, or three tests, but let's face it: Avast has been mediocre for quite a while now, something is definitely amiss... Maybe its default configuration is to blame.
     
  14. SLE

    SLE Registered Member

    But only if it is a random sample. If different malware types are systematically represented in the sample, a small sample can be more valid than a large one ;)
    So this "mathematics" of xxx.000 samples a day vs. a small test set is not valid, because the samples are often very similar, belong to families, and signatures or proactive components catch more than one. Besides that, prevalence data plays a huge role in many professional (signature!) tests...
     
  15. Minimalist

    Minimalist Registered Member

    Yes, I agree. A lot of parameters could be used when analyzing performance (like virus distribution paths, computer usage scenarios...), but those are hard to include in tests. Since we don't have a central database of all malware, it's also hard to get a "random" sample.
     
  16. kdcdq

    kdcdq Registered Member

    Yes, finally a test where AVG did not finish in the "also-ran" category. :thumb:
     
  17. tgell

    tgell Registered Member

    Congrats to Qihoo 360, but the test uses V5. What's up with that? The U.S. version is still at 4. I wonder if AV-Comparatives will ever test Total Security?

    Trend is looking really good. I hope avast with NG will start showing better results.
     
  18. 93036

    93036 Registered Member

    Love Fortinet...just too heavy for older systems...

     
  19. zfactor

    zfactor Registered Member

    Fortinet is very good, especially for free, except it's super heavy imo.
     
  20. zfactor

    zfactor Registered Member

    Maybe we will see this again next time; if so, I'll be happy for AVG, though I'm not sure (lol).
     
  21. Brandonn2010

    Brandonn2010 Registered Member

    Thank you. I knew there was some statistical rule for figuring out how large a sample has to be to be meaningful.

    Regarding the test, seems Panda had a bad month.
     
  22. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Avira results are also very disappointing :gack:
     
  23. ance

    ance formerly: fmon

    ... but it's still better than Avast and Lavasoft. :D
     
  24. garrett76

    garrett76 Registered Member

    Not such a good performance by Panda either. I was used to seeing 100% :D
     
  25. Stefan Kurtzhals

    Stefan Kurtzhals AV Expert

    Anything below 100% is boring. Above 100% is the exciting stuff :D
     