Per the test report (also refer to Appendix D): WD's score for web-downloaded malware was 69/75. It was the lowest-scoring product in this category, but still somewhat respectable. Where WD completely "dropped the ball" protection-wise was in the targeted malware category, scoring only 7/25. It goes without saying it was also the lowest-scoring product in that category. Bottom line: WD in its "out of the box" configuration is substandard in protection. Microsoft implicitly acknowledges this by providing additional advanced Windows Defender Exploit Guard protections that must be manually configured for maximum protection. Additionally, its advanced ASR protections are only officially supported on the Enterprise version of Windows.
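For anyone who wants to turn those extra protections on themselves, here is a rough sketch (not an official procedure) of enabling a single ASR rule from Python by shelling out to the documented Add-MpPreference cmdlet. The rule GUID is a placeholder you would replace with one of the rule IDs from Microsoft's Exploit Guard documentation, and it must be run elevated on a Windows edition that supports ASR:

```python
# Rough sketch only: enable one Attack Surface Reduction (ASR) rule by
# calling the documented Add-MpPreference PowerShell cmdlet. Replace the
# placeholder GUID with a rule ID from Microsoft's ASR documentation.
# Run from an elevated prompt on a Windows version that supports ASR.
import subprocess

ASR_RULE_GUID = "<rule-guid-from-microsoft-docs>"  # placeholder, not a real ID

ps_command = (
    "Add-MpPreference "
    f"-AttackSurfaceReductionRules_Ids {ASR_RULE_GUID} "
    "-AttackSurfaceReductionRules_Actions Enabled"
)

# Invoke PowerShell and raise if the cmdlet fails (e.g. not elevated).
subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_command],
    check=True,
)
```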
The thing is that Windows Defender has performed pretty well in recent tests, which is why I was a bit surprised. So perhaps it's not all that good after all. I also believe this test was not sponsored.
Someone over in the Eset forum stated it better than I could as to what many vendors do in regard to "prepping" for these comparatives:
there have been a number of suites that were shown to perform excellently while in reality they're complete and utter garbage, e.g. Panda, McAfee
R- Understand that this is coming from someone who has WD active on Win10: I frequently get new malware from old colleagues. Rarely if ever does WD make a peep on unzipping the malware, or on running it (in a VM, of course). However, if I ever plug in a flash drive with naked (non-zipped) malware on it that is 3 days old or older, WD will light up like the sun, deleting the stuff and pissing me off. Point being, if you want something that will detect older malware, WD will fit the bill. If you want true zero-day coverage, use CF. (P.S. Understand that the "pro" AV testing sites will never ever (never ever) use true zero-day samples, which is what is currently found in the wild; this type of testing is actually extremely difficult to do. It really amazes me that so many folks just lap up the results of what amount to legacy malware tests as hogs do slop (going back to my country-girl days here) instead of demanding true zero-day, real-life results. You guys should be outraged!!!) M
I am. Case in point: a while back, Microsoft got a lot of free press regarding a "supposed" 0-day malware detection by WD ATP: https://cloudblogs.microsoft.com/mi...spoils-a-massive-dofoil-coin-mining-campaign/ . So I decided to do my own "detection claim" detective work. I went to VT and did a hash lookup using the hash provided by MS in the article. I noted that many of the major AV solutions were also detecting it. Then I noticed that most of the signature detection dates were at least one day prior to the date MS noted it detected Dofoil. What people didn't realize is that this was a variant of the Smoke Loader malware; as such, it wasn't a new 0-day per se. What folks fail to comprehend is that the major AVs employ Yara-like behavior signatures that detect variants like this, and did so long before Windows Defender ATP even recognized the new malware strain. What this incident does show is that no one "checks out" Microsoft's claims of "superior" malware detection capability. Not surprised at that one, since most security publishers just copy what is posted from the source. I didn't post my findings on Wilders at the time since it was during one of those periods when I was tired of reading about all the "latest and greatest" band-aid security mechanisms Microsoft was pushing.
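If anyone wants to repeat that kind of check themselves, here is a minimal sketch of the hash lookup using the public VirusTotal v3 file endpoint; the API key and hash below are placeholders, not the actual values from the Microsoft article:

```python
# Minimal sketch of a VirusTotal v3 hash lookup. API_KEY and FILE_HASH
# are placeholders; substitute your own key and the hash you want to check.
import requests

API_KEY = "YOUR_VT_API_KEY"
FILE_HASH = "<sha256-of-the-sample>"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()
attrs = resp.json()["data"]["attributes"]

# When the sample was first submitted, and which engines flag it now.
print("first submitted (unix time):", attrs.get("first_submission_date"))
for engine, result in attrs["last_analysis_results"].items():
    if result["category"] == "malicious":
        print(f"{engine}: {result['result']}")
```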
Since Avast now owns AVG, my understanding is that they share the same technology and that the primary difference between them nowadays is just the user interface. However, this test clearly found significant protection differences between the two (with AVG "beating" Avast itself). Anyone have some thoughts/explanations here?
This also needs further comment. I have been tracking WD signature development for some time, and CS is right. It takes MS at least 3 days to develop a positive signature for malware. Contrast that with the major AV vendors, who can do the same in a 4-8 hour window; in the interim, they have blacklisted the malware by hash. Per CS's statement, Microsoft can't even do that, since native SmartScreen relies on the mark-of-the-web being present, which doesn't exist on USB drive files. Most important, the major AV vendors can scan for malware inside archives without having to unzip them. Microsoft is listed among the vendors on the AMTSO compressed malware test here: https://www.amtso.org/feature-settings-check-download-of-compressed-malware/ . I suspect what is happening is that WD is examining files as they uncompress. A similar allowance is given on the AMTSO Cloudcar test, in that WD only detects the download after it has hit the disk, which strictly speaking is not correct, since the download should be detected via web filtering signature methods before it hits the disk. Obviously that can't happen with WD since it doesn't have such a web filter; SmartScreen detects via blacklisting/whitelisting.
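To see for yourself why SmartScreen ignores files copied from a USB drive, here is a small illustrative check (the path is hypothetical) for the Zone.Identifier alternate data stream, i.e. the mark-of-the-web that NTFS attaches to browser downloads but not to files copied from removable media:

```python
# Illustrative check for the "mark of the web": browser downloads on NTFS
# carry a Zone.Identifier alternate data stream; files copied from a USB
# stick typically do not, so SmartScreen's reputation check never fires.
def mark_of_the_web(path):
    """Return the Zone.Identifier contents, or None if the stream is absent."""
    try:
        with open(path + ":Zone.Identifier", "r") as stream:
            return stream.read()
    except (FileNotFoundError, OSError):
        return None

sample = r"C:\Users\Me\Downloads\sample.exe"   # hypothetical path
motw = mark_of_the_web(sample)
if motw is None:
    print("No mark of the web - SmartScreen reputation check will not trigger.")
else:
    print(motw)   # typically shows [ZoneTransfer] with ZoneId=3 (Internet zone)
```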
SE Labs tested the free versions of each. I believe your statement about the paid versions of Avast/AVG being the same is correct. Obviously, this doesn't hold true for the free versions.
You want an anti-exe? 'Cause that's how you get one. "Unknown to WD" (aka everything that wasn't published a week ago and isn't widely used) will be blocked. False positive machineeeeeee.
I'm just curious to know if someone has experience with that setting and how it performs against true zero-day malware. It's normal that you'll get more FPs that way, but I'm more interested in the added security you get: does WD become bulletproof?
Actually, WD file blocking is implicitly controlled by the "block at first sight" feature, which has been around since ver. 1703. Ver. 1803 just added further granularity options to this setting. Also, these new settings, like modifications to the old setting, have to be made via Group Policy, which is not available in the Win 10 Home version. The big new enhancement for ver. 1803 is described here: https://docs.microsoft.com/en-us/wi...ock-at-first-sight-windows-defender-antivirus . Also, it is not WD that determines a file's unknown status; native SmartScreen does that. All WD is doing is automating the manual determination process normally required with a SmartScreen alert. Important to note is that SmartScreen runs as a medium integrity level standalone process in Win 10, in stark contrast to the reputational scanners used by the major third-party AV vendors, which incorporate them within their protected kernel processes. As such, SmartScreen can be easily disabled by malware. It has been "beefed up" in recent Win 10 versions in that it will restart itself if disabled, but malware only needs milliseconds to complete any associated bypass processing.
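If you want to verify how "block at first sight" and the related cloud protection knobs are set on your own machine, here is a rough sketch that reads the documented Get-MpPreference properties from Python; the script itself is just my own quick check, not anything official:

```python
# Rough sketch: read the Defender preferences behind "block at first sight"
# by shelling out to the documented Get-MpPreference cmdlet. The property
# names are Defender's own; the wrapper script is just an informal check.
import json
import subprocess

PROPS = [
    "DisableBlockAtFirstSeen",   # must be False for the feature to work
    "MAPSReporting",             # cloud protection level (0 = off)
    "SubmitSamplesConsent",      # sample submission consent
    "CloudBlockLevel",           # cloud blocking aggressiveness (1803+)
    "CloudExtendedTimeout",      # extra seconds the cloud may hold a file
]

ps_command = (
    "Get-MpPreference | Select-Object " + ",".join(PROPS) + " | ConvertTo-Json"
)
output = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_command],
    capture_output=True, text=True, check=True,
).stdout

for name, value in json.loads(output).items():
    print(f"{name}: {value}")
```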
It should also be noted that "block at first sight" is the primary reason for WD's dramatic recent improvement in the realtime tests performed by AV-Comparatives, AV-Test, and Malware Research Group. These tests do not penalize a vendor for false positives or for detections requiring user interaction. Also, most consider a detection within 24 hours to be on the same level as a detection at initial test time. Finally, the realtime tests are restricted to web-delivered malware via browser URL download, with samples numbering 400 or fewer. All these labs additionally perform periodic "full spectrum" dynamic tests. These tests include thousands of malware samples deployed through multiple delivery methods. Additionally, and most notably, vendors are penalized in these tests for false positives and user-interaction activity. AV-Comparatives, for example, uses a 50/50 rule in regard to user interaction; i.e. 50% of user decisions are assumed to be wrong and result in compromise, which is most generous in my opinion. In this type of testing, WD has historically performed poorly, ranking in the bottom quartile. Finally, AV-Comparatives produces an annual protection summary report. The monthly realtime test results are summarized into semi-annual results and those results are averaged. These results, along with the other tests within the year such as the dynamic tests, are then combined and averaged into one annual protection result. For 2017, WD was ranked last.
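As a toy illustration of that 50/50 rule (the numbers here are made up for the example), a user-dependent case counts as half protected and half compromised when the protection rate is computed:

```python
# Toy illustration of the 50/50 weighting: user-dependent cases count as
# half protected, half compromised. Sample counts below are invented.
def protection_rate(blocked, user_dependent, compromised):
    total = blocked + user_dependent + compromised
    return (blocked + 0.5 * user_dependent) / total * 100

# Hypothetical product: 1900 samples blocked outright, 60 left to the user,
# 40 outright compromises, out of 2000 samples.
print(f"{protection_rate(1900, 60, 40):.1f}%")   # -> 96.5%
```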
OK, so what you guys are saying is that Win Def is pretty bad when it comes to blocking zero-day malware, and that's why it performed badly in this test. Are you sure they don't monitor false positives? Here's some more info: http://www.winhelponline.com/blog/defender-block-at-first-sight-cloud-protection/ https://demo.wd.microsoft.com/
Yes. As I posted previously, the AV labs' "full spectrum" dynamic tests do penalize for false positives and user interaction; the realtime tests do not. SE Labs, on the other hand, does factor these in for its realtime tests. I believe it is the only AV lab to do so in this test category.
Note that MS, like Norton, focuses on the prevalence of the malware. WD (on Home versions) doesn't have a real default-deny feature, so I never expected it to catch unknown threats.