MRG Effitas 360 Degree Assessment & Certification Q2 2020 https://www.mrg-effitas.com/wp-content/uploads/2020/09/MRG_Effitas_2020Q2_360.pdf
Defender won the performance test, yet AV-Comparatives places it near the bottom among the heaviest AVs, and I can also feel it on any system that has Defender on it: how sluggish the machine becomes even when browsing File Explorer or installing apps. I take this test with a grain of salt.
It is well known that performance impact is very difficult to gauge, as it is by and large tied to individual systems, which all differ in hardware and software configuration. In my case I used Avira Pro and Kaspersky Internet Security, and I can attest that on my machine MS Defender is now as fast as the above-mentioned AVs. It was not always so, though: as recently as 2 years ago Avira and KIS were definitely faster. But I agree, tests offer only a general indication; the best test is to trial the software on a particular machine.
Windows Defender is not sluggish on my computers. In fact, it's lighter than five other AVs I have tested over the past few months.
Finally, a lab test where no product detected all samples on the initial examination, mirroring the realistic real-world malware situation. Kudos to MRG for the effort to gather malware samples reflecting this.
Yes, and I am greatly impressed that NONE of the tested apps achieved Level 1 Certification. IMO, AV-Comparatives and the other test labs had better emulate the stringent testing criteria and techniques of MRG Effitas in order to regain their credibility as any sort of basis for choosing between competing AVs. As of now (& probably never) there is no *perfect* app for protection against any and all threats, and tests that fail to reflect that fact are patently inadequate. When I see test results where the charts show some security apps attaining 99% or greater protection, and NO apps with significant failures, I am instantly bored and read no further. For now, tests by MRG Effitas stand alone in holding my full attention in that respect.
I miss Kaspersky in this test too. They are usually included in all major tests, so IDK why they didn't participate this time.
My best guess is that Kaspersky didn't like the results shown for its small-business product in the 2020 Q1 test.
They missed 10% of the samples in the Exploit/Fileless test. Also, a few financial malware detections took 24 hours.
Win Defender actually scored 100% for the first time when it came to blocking exploits and fileless malware; I guess they have improved their behavior blocker. It also did well when it came to false positives. However, only Bitdefender and Symantec blocked 100% of the "in the wild" malware test, so perhaps Norton should be bashed a little less.
Not really: -EDIT- https://docs.microsoft.com/en-us/wi...tion/microsoft-defender-atp/edr-in-block-mode. I really believe the detection is being made in the Azure cloud sandbox. In other words, the process is held in a suspended state awaiting final determination from the cloud.
That's true, they missed 1 sample. If there had been only 2 samples in the test instead of 10, they could have missed 50%. With such small sample sizes I don't care much about percentages; only absolute numbers are of interest to me. EDIT: some interesting discussion about randomness here: https://malwaretips.com/threads/randomness-in-the-av-labs-testing.104104/
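To put rough numbers on why small sample sizes make percentages shaky, here is a quick sketch using the standard Wilson score confidence interval for a binomial proportion (this is generic statistics, not part of the MRG Effitas methodology; the sample counts are just illustrative):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a detection rate.

    successes: number of samples detected, n: total samples tested,
    z: normal quantile (1.96 for a 95% interval).
    """
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# A product that misses 1 of 10 samples reports "90% detection",
# but the 95% interval spans roughly 60%-98%:
lo, hi = wilson_interval(9, 10)
print(f"9/10 detected:   95% CI {lo:.0%}-{hi:.0%}")

# The same 90% rate measured over 100 samples is far better pinned down:
lo, hi = wilson_interval(90, 100)
print(f"90/100 detected: 95% CI {lo:.0%}-{hi:.0%}")
```

So a 10-sample exploit test genuinely cannot distinguish a 70% product from a 95% one, which is exactly why the absolute miss count is the more honest number to quote.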
This is confusing, so was Win Def tested together with Win Def ATP? Or is this feature also available on Win 10 Home?
Err... The test was for enterprise/commercial security solutions. As far as WD ATP goes, it's an optional subscription service. It's only included by default in the Windows Enterprise editions, which BTW are subscription-based. And I believe the minimum requirement is now Win 10 Pro or above.
OK, my bad. So it was actually the Win Def "Pro" version that was tested. So there is a big chance that without the ATP component, Win Def AV would have once again failed to block fileless malware.