MRG Effitas 360 Assessment & Certification Programme Q1 2017

Discussion in 'other anti-virus software' started by guest, May 11, 2017.

  1. guest

    guest Guest

    Source:

    https://www.mrg-effitas.com/wp-content/uploads/2017/05/MRG-Effitas-360-Assessment-2017-Q1_wm.pdf

    17 Applications Tested

    386 In-the-Wild malware samples used

    Interesting results ...more in the link above
     
  2. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    Indeed some interesting results...
     
  3. plat1098

    plat1098 Guest

    Oh, ouch! Avira must be celebrating, however. I feel for Malwarebytes right now.:doubt:
     
  4. itman

    itman Registered Member

    Joined:
    Jun 22, 2010
    Posts:
    8,593
    Location:
    U.S.A.
MRG used Edge as the browser for this test. Unless MRG specifically disabled Win 10's native SmartScreen, and I see no reason why they would since it's enabled by default, the argument that SmartScreen being disabled explains Windows Defender's poor test scores is moot. Edge's default setting is to use native SmartScreen. @Zoltan_MRG, any comments on this?

    Eset and HMP scored 100% in ransomware protection.

    Interesting comments by Kaspersky as to reasoning for their lower test scores.
     
    Last edited: May 11, 2017
  5. ArchiveX

    ArchiveX Registered Member

    Joined:
    Apr 7, 2014
    Posts:
    1,501
    Location:
    .
  6. plat1098

    plat1098 Guest

    I see that, but I wonder whether it isn't actually HitmanPro.Alert, which has all the real-time modules; HitmanPro is the adjunct on-demand scanner. Which is it?
     
  7. guest

    guest Guest

    HitmanPro (on-demand scanner):
    Page 7: "Security Applications Tested [...] SurfRight HitmanPro 3.7.15 - Build 281"
     
  8. itman

    itman Registered Member

    Joined:
    Jun 22, 2010
    Posts:
    8,593
    Location:
    U.S.A.
    I believe HMP-A protections are now incorporated into the paid version of HMP?
     
  9. Martin_C

    Martin_C Registered Member

    Joined:
    Dec 4, 2014
    Posts:
    525
    These MRG tests ...
    You can call them a lot of things, yet "interesting" is not one of the words that pops into my mind.

    As everyone remembers from the last MRG test, MRG said they disable SmartScreen and UAC in all tests.

    Not very smart to disable the URL filter in Edge, when they (and everybody else, rightfully) praise having URL filtering in the kill chain.
    Not very smart to disable UAC, since first of all it lets samples elevate freely. It also disables SmartUAC, which in Windows 10 triggers a deep scan from WD of anything that tries to elevate privileges.

    Remembering a little further back, MRG wrote that they disable automatic sample submission.
    Not very smart to disable automatic sample submission in WD, since that also disables WD's Block at First Sight feature. (A small sketch of how those defaults can be checked on a test image follows this post.)

    Finally, I notice that the graphs and the written test results do not match.
    In the Financial Malware section we can read that out of a sample set of 67 samples, Microsoft missed 0 samples, 4 samples were blocked by behavioral blocking and 63 samples were auto-blocked.
    Yet the graph claims there were missed samples?

    I must say these MRG tests seem to be a complete mess, as already discussed in the last MRG thread some months ago.
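
    To make the point about defaults concrete, here is a minimal sketch of how one could check those Windows 10 settings (SmartScreen, UAC, and Defender sample submission, which Block at First Sight depends on) on a test image. The registry locations below are the commonly documented ones and are assumptions used for illustration, not settings taken from the MRG methodology.

    ```python
    # Minimal sketch: read the registry values behind the Windows 10 defaults
    # discussed above. Paths are the commonly documented locations (assumed
    # here for illustration); they can vary by build, and some keys may need
    # elevated rights to read.
    import winreg

    CHECKS = [
        # (hive, key path, value name, what it indicates)
        (winreg.HKEY_LOCAL_MACHINE,
         r"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer",
         "SmartScreenEnabled",
         "Windows SmartScreen ('Off' = disabled)"),
        (winreg.HKEY_LOCAL_MACHINE,
         r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
         "EnableLUA",
         "UAC (0 = disabled)"),
        (winreg.HKEY_LOCAL_MACHINE,
         r"SOFTWARE\Microsoft\Windows Defender\Spynet",
         "SubmitSamplesConsent",
         "WD automatic sample submission (Block at First Sight depends on it)"),
    ]

    def read_value(hive, path, name):
        """Return the registry value, or None if the key/value is missing or unreadable."""
        try:
            with winreg.OpenKey(hive, path) as key:
                value, _type = winreg.QueryValueEx(key, name)
                return value
        except OSError:
            return None

    if __name__ == "__main__":
        for hive, path, name, meaning in CHECKS:
            print(f"{meaning}: {read_value(hive, path, name)!r}")
    ```

    If SubmitSamplesConsent reads as the "never send" setting, automatic submission, and with it Block at First Sight, is effectively off; the same kind of spot check applies to the SmartScreen and UAC values.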
     
  10. guest

    guest Guest

    nope
     
  11. Nightwalker

    Nightwalker Registered Member

    Joined:
    Nov 7, 2008
    Posts:
    1,387
    +1 Great post
     
  12. garrett76

    garrett76 Registered Member

    Joined:
    Mar 18, 2014
    Posts:
    221
    Another strange thing about this test: Avast and AVG share the same engine, so why does AVG score better?
     
  13. guest

    guest Guest

    +2
     
  14. guest

    guest Guest

    Because of their proactive technologies.

     
  15. plat1098

    plat1098 Guest

    Thanks, mood. Clearly it can be ASSUMED that I did not click and load the link; I only looked at it. This would pertain to detection via HitmanPro, then, not protection via Alert.
     
  16. itman

    itman Registered Member

    Joined:
    Jun 22, 2010
    Posts:
    8,593
    Location:
    U.S.A.
    It also appears MRG changed their scoring criteria so that any detection other than an auto-block was considered a failure; that is, a user-determined action via behavior detection counted as a failure. That explains Symantec's low scores.
     
  17. Dark Star 72

    Dark Star 72 Registered Member

    Joined:
    May 27, 2007
    Posts:
    778
    If you look at page 11 (Financial Malware) of the report, you will notice that Webroot has a red portion at the top of its column, indicating it failed on 2 or 3 items, yet the score breakdown at the bottom gives a score of 100% with no failures. Not the first time we have seen this 'mistake' with Webroot in MRG 'tests'! I wonder what the chart will show when it's corrected :confused:
     
  18. Macstorm

    Macstorm Registered Member

    Joined:
    Mar 7, 2005
    Posts:
    2,642
    Location:
    Sneffels volcano
    :thumb:
     
  19. Hiltihome

    Hiltihome Registered Member

    Joined:
    Jul 5, 2013
    Posts:
    1,131
    Location:
    Baden Germany
    With SmartScreen and other Win 10 security features disabled, this "test" is not worth a penny, or a cent.
     
  20. guest

    guest Guest

    :thumb:
     
  21. anon

    anon Registered Member

    Joined:
    Dec 27, 2012
    Posts:
    8,005
  22. guest

    guest Guest

    Hence my following post on the other thread.

    Anyway, there is no standalone WD now; it is part of a wider mechanism (aka Windows built-in security, all grouped into the WD Security Center). So either test it with all the features it works with, leave it out of the test entirely, or test it only alongside other standalone scanners.
     
  23. avman1995

    avman1995 Registered Member

    Joined:
    Sep 24, 2012
    Posts:
    944
    Location:
    india
    guest, Avast and AVG share each other's cloud and IDP engines from what we know. I have raised this question about the variation in detection with Avast anyway.

    My queries here are:

    1) Was there any time gap between testing a URL on each machine with each product? If so, the test is unfair; even a half-hour window is enough for an AV cloud to flag a previously undetected piece of malware as bad.

    2) I read their criteria where they explain "missed/bypassed", but what if the sample was anti-VM, as most ransomware is nowadays, and did nothing to the system anyway? They do include anti-VM samples that won't do their dirty work in a VM. If you ask me, this is the problem: if the file doesn't do its dirty work, a behavior blocker can't stop it (see the sketch after this post for the kind of check such samples run). Kaspersky has made an argument about two malware files that were broken and did no harm.
    In the Financial Malware section we can read that out of a sample set of 67 samples, Microsoft missed 0 samples, 4 samples were blocked by behavioral blocking and 63 samples were auto-blocked.
    Yet the graph claims there were missed samples?
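
    To illustrate what "anti-VM" means in this context, here is a minimal sketch of the kind of environment check such a sample might run. It only looks at the publicly known VMware/VirtualBox MAC vendor prefixes, one of many indicators real samples combine, and nothing in it is taken from the MRG report.

    ```python
    # Minimal sketch of a crude anti-VM check: compare a local MAC address
    # against publicly known VMware/VirtualBox vendor prefixes. Real samples
    # combine many more indicators (BIOS strings, drivers, CPUID, timing, ...).
    import uuid
    from typing import Optional

    VM_MAC_PREFIXES = {
        "00:05:69": "VMware",
        "00:0c:29": "VMware",
        "00:50:56": "VMware",
        "08:00:27": "VirtualBox",
    }

    def primary_mac() -> str:
        """Return one local MAC address as 'aa:bb:cc:dd:ee:ff'."""
        node = uuid.getnode()  # 48-bit integer
        return ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

    def vm_vendor(mac: str) -> Optional[str]:
        """Return the VM vendor if the MAC prefix matches a known VM OUI, else None."""
        return VM_MAC_PREFIXES.get(mac[:8])

    if __name__ == "__main__":
        mac = primary_mac()
        vendor = vm_vendor(mac)
        print(f"MAC {mac}: " + (f"likely a {vendor} guest" if vendor else "no VM vendor prefix"))
    ```

    A sample that spots such an indicator can simply exit before doing anything malicious, which is exactly the case where a behavior blocker has nothing to block.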
     
  24. guest

    guest Guest

    They didn't specify this.

    Indeed.

    The report graphs have many (deliberate?) issues. People just look at the graphs; they barely read the small description underneath...
     
  25. Rasheed187

    Rasheed187 Registered Member

    Joined:
    Jul 10, 2004
    Posts:
    17,559
    Location:
    The Netherlands
    Yup, seems to be the same old story. Certain fan-boys can't deal with the ugly and messy truth and come up with the same lame excuses again. A big thanks to MRG for exposing this stuff; so Win Defender is still crap even on Win 10, and Malwarebytes really should stop promoting itself as an AV replacement.
     