An interesting article can be found here: https://arstechnica.com/information-technology/2017/04/the-mystery-of-the-malware-that-wasnt/ Our comment replying to: "For its part, Cylance denied the screenshot actually showed Protect's code. "It’s a hex view of a sample packed with MPRESS and VMprotect, it looks like," a Cylance spokesperson said in response to that allegation. "It’s a sample from TestMyAV, I believe. It's malware, not Cylance."" can be found here: https://www.youtube.com/watch?v=swXrBKoTVv4
Wow! One of the best articles I have read on the current state of AV lab testing, especially the current shenanigans with testing "next gen" AI/ML products. To me, the following sums up the main issue: it has become fairly obvious that "fudging" has escalated in recent testing, although, as the article pointed out, it has always existed to some degree.

If the AV lab industry wants to save whatever little is left of its reputation, it is time the labs jointly get together with AMTSO, to which most belong, and agree on a fair and impartial way of gathering malware test samples. For me, that entails using an independent source not connected in any way with AMTSO or the AV/next-gen vendors. Additionally, the samples must be kept confidential and not disclosed to the parties being tested. To ensure this, separate, different samples should be supplied to each AV lab: the malware categories would be the same, but the malware selected different. Of course, this is going to cost far more than the methods currently employed. But not doing so will in short order basically put the AV labs out of business, since no one will trust their test reports as an evaluation factor in the security product purchase decision.

AMTSO could then publish a quarterly "averaged" report of the individual AV lab test results, which would give a prospective purchaser a good idea of a particular security product's capabilities. For example, if a security product participated in numerous tests and its average score was high, that would be a better indicator of malware effectiveness than a vendor that participated in only a single test and scored well in it, since the multiple-test vendor was exposed to a wider variety of malware samples. Note that this contrasts with the current method of using a "standard" malware database provided by AMTSO, which many members utilize. Finally, the samples supplied must be real malware and not "synthetically" developed bogus ones.
For synthetic malware testing, a separate test category should be created, called penetration testing, since that is what is actually being performed.
There is no such thing as "real world" tests, unless you put 1000 ***** each behind their own computer, deliver malware to them via traditional methods (email, USBs, etc.), and then observe the software's reaction based on those ***** behaviors.
Everything's fine, says Cylance, as 'one in five' workers given the boot.... https://www.theregister.co.uk/2017/04/05/cylance_restructuring/
As if they are hurting for cash: they've gone through 4 funding series that netted a total of 177 million USD. They also have the best Board of Directors money can buy, from Mark Weatherford of Palo Alto fame, who was Michael Chertoff's hitman at the Chertoff Group (Chertoff was the co-author of the US Patriot Act), to a 4-star admiral (W. Fallon), to the former CISO of the CIA, Robert Bigman (and I won't even mention riff-raff like Mark Hatfield, formerly of the Transportation Security Administration). That is the preferred way of assuring lucrative contracts: connections are everything. And the company will soon go public with an IPO that is estimated to net it north of 1.5 billion USD. In short, I doubt they give a flying xxxx about this controversy, and when they say everything is fine, actually everything couldn't be better...
Which again confirms the scientific research studies stating that standalone AI/ML security is not ready for mainstream production deployments and will not be for at least 4-5 years. If there is any evaluation measure that should be applied to security software, it is whether it can "stand the test of time." The past has shown many promising solutions that were abandoned in short order.
Someone over at malwaretips.com made an interesting comment regarding the packed-malware baloney Cylance employed: Yes and no. What about packed and obfuscated scripts? Well, if you're running Win 10 and an AV product (including Windows Defender) uses the AMSI interface, it can scan that script after it unpacks, prior to its being loaded into memory. Want to bet the Cylance tests were done on non-Win 10 OSes? Unfortunately, most corps are still running Win 7.
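For readers unfamiliar with the AMSI mechanism mentioned above, here is a minimal sketch of how a script host hands de-obfuscated content to the installed AV engine via the Windows Antimalware Scan Interface. This is an illustration only, assuming Windows 10+ and linking against amsi.lib; the application and content names are hypothetical.

```c
/* Minimal AMSI sketch (Windows 10+ only; compile with MSVC, link amsi.lib).
 * A script host calls AmsiScanBuffer with the final, unpacked script text
 * just before execution, so obfuscation layers don't help the attacker. */
#include <windows.h>
#include <amsi.h>
#include <stdio.h>
#pragma comment(lib, "amsi.lib")

int main(void) {
    HAMSICONTEXT ctx = NULL;
    HAMSISESSION session = NULL;
    AMSI_RESULT result = AMSI_RESULT_CLEAN;

    /* "DemoScriptHost" is a made-up name identifying the caller to the AV. */
    if (FAILED(AmsiInitialize(L"DemoScriptHost", &ctx)))
        return 1;
    AmsiOpenSession(ctx, &session);

    /* This buffer stands in for the script AFTER any packing/obfuscation
     * has been stripped by the interpreter, but BEFORE it runs. */
    const wchar_t script[] = L"Write-Host 'hello'";
    AmsiScanBuffer(ctx, (PVOID)script, sizeof(script),
                   L"demo.ps1", session, &result);

    /* AmsiResultIsMalware is true when the engine flags the content. */
    printf(AmsiResultIsMalware(result) ? "blocked\n" : "allowed\n");

    AmsiCloseSession(ctx, session);
    AmsiUninitialize(ctx);
    return 0;
}
```

This is exactly the hook the comment alludes to: on Win 7, which lacks AMSI, the AV product only ever sees the packed bytes on disk.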
And there is a difference between packing, obfuscating, and wrapping. Wrapping is effective because it usually delivers the payload before the legitimate software starts installing, so even if it is detected, unaware users would take the alert as a false positive and allow it.
Latest on Cylance's "dirty tricks" here: http://www.securityweek.com/cylance-battles-malware-testing-industry . Any doubts about the relationship between NSS Labs and Cylance are also clarified there.
Yes indeed, it seems a lot of people are complaining about the false positives Cylance generates. That might be a deal breaker for companies, even though I still think their tech is interesting.