Securing the browser - UAC/LUA/AL, Protected Mode, Sandbox, VM, HIPS, AV?

Discussion in 'other security issues & news' started by katio, Jun 6, 2010.

Thread Status:
Not open for further replies.
  1. katio

    katio Guest

    First of all: Hello everybody. I'm new here, but I've been visiting this forum for quite a while with great interest. I don't think it's far-fetched to say this is the very best place on the internet to ask this sort of question:

    I'm talking about "securing the browser", not "securing the OS", here. Why? Because I think for a more or less typical tech-savvy user, equipped with Common Sense 2010, practically the only way to get "owned" is through the web browser. Network attacks aren't likely behind a consumer router and without internet-facing services. Email is out because virtually everyone seems to be using webmail these days. What's left are Trojans and other rogue programs the user installs knowingly; common sense helps a lot there: don't install from untrusted sources. Then there are all sorts of software vulnerabilities, buffer overflows and so forth. The way you get infected by them is through maliciously crafted files - downloaded, of course, by your browser. And the most common security breaches involve drive-by downloads, compromised web servers, malicious Flash, Java and JavaScript - all sorts of remote code happily executed by the browser. Port 80/HTTP is still by far the most common way new, untrusted and potentially dangerous stuff enters your system. Not to forget what motivates the bad guys: money. And where do you enter your bank credentials?

    That's what I'm assuming anyway. Please do correct me if I'm wrong here or if I'm overly simplifying the threat landscape.

    There are lots of different security products, policies and access controls for different attack vectors. What I'm interested in is how they compare on what is, IMO, by far the most important task: securing the web browser.

    AV
    I don't think highly of signature-based detection, at all. It's an arms race, and we all know who's winning. Signatures offer zero protection from the scariest attack, the 0-day exploit (excuse the pun), they introduce lots of code into the OS, thereby increasing the attack surface, and they usually have a more than negligible impact on performance. HIPS share the latter two shortcomings; they have the advantage of proactive 0-day protection, at the cost of being a lot noisier and generally slowing you down when you want to get things done. Not to mention they place all the burden of knowing what's good and what's bad (at a low level) on the user.

    AVs serve a purpose for checking downloaded binaries you don't trust; for that I prefer VirusTotal or Jotti. A HIPS (and an outbound firewall) is primarily useful for analyzing apps you run but still don't fully trust. As long as you don't install loads of random software, I think you can do pretty well without either.

    VM
    VMs are probably the most robust way to create security through isolation; it's pretty easy to achieve 99% secure browsing, leaving an exploit in the hypervisor as the only realistic risk (disregarding carelessness, social engineering etc.). The main drawback, again, is performance: ever run a full graphical OS in a VM on a 5,200 RPM laptop drive with <4 GB RAM? You need a pretty beefy PC before you can talk about deploying multiple VMs for multiple security zones (as explained here: http://www.tomshardware.com/reviews/joanna-rutkowska-rootkit,2356.html ). Setup and maintenance also take some time: you need to install and patch every VM and occasionally reboot them, which is easily 2-3x slower than on a native install.

    Sandbox
    A sandbox trades some of that secure isolation for ease of use and performance. I've tried the Comodo Sandbox. My verdict: it's buggy - the mouse cursor doesn't work right, the clipboard can't be enabled without compromising most of its security settings, and there's no sound in Flash. I've also used Sandboxie, which is a lot more user-friendly (I like the way it handles downloaded files), and so far there have been no showstopping bugs. The thing holding me back are the limitations on 64-bit Windows with PatchGuard. I think it's adequate against unauthorized data access and most malicious JavaScript, but I don't know if and how it protects against all those browser exploits that involve memory corruption, for example.


    Now to a specialty of this forum:
    UAC, LUA, SRP, AppLocker...
    This approach has a lot going for it: it doesn't require 3rd-party software, there's no measurable performance impact, it's comparably simple to use (which makes user errors less likely, and hopefully means fewer bugs in its implementation), and it doesn't require any maintenance after the initial setup.
    But how does it perform in terms of security? Properly configured, it means you can't install new software (ruling out all kinds of drive-by downloads) and any exploit can only run as a limited user. The first point is pretty clear, but the second one needs further analysis.
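
    To make "properly configured" a bit more concrete, here's a rough sketch of what a default-deny SRP setup boils down to in the registry. Normally you'd build this through secpol.msc rather than reg add, and the exact values below are just my understanding of a typical setup, so treat them as illustration only:

        :: Default rule "Disallowed" (0x0); "Unrestricted" would be 0x40000.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v DefaultLevel /t REG_DWORD /d 0 /f

        :: Enforce for all executable files except DLLs (2 would include DLLs as well).
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v TransparentEnabled /t REG_DWORD /d 1 /f

        :: Apply to all users except local administrators.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /v PolicyScope /t REG_DWORD /d 1 /f

    The allow-path rules for %SystemRoot% and %ProgramFiles% then sit as GUID subkeys under ...\CodeIdentifiers\262144\Paths, which is why a limited user can still run the OS and installed programs but nothing dropped into their own profile.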

    It has been pointed out numerous times that one doesn't need "root" access to do devastating damage. If all your files and the browser you use to access your bank account run under the same user/token as the malware, you've lost. A neat idea to mitigate this is to use separate accounts for different tasks and use file permissions to protect sensitive files - roughly along the lines of the sketch below. You don't even have to switch users ("Do-It-Yourself: Implementing Privilege Separation" on http://theinvisiblethings.blogspot.com/2007/02/running-vista-every-day.html - haven't tried this myself yet).
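
    Something like this is what I have in mind; the account names and paths are made up for illustration and I haven't tested it, so take it as a sketch of the idea rather than a recipe:

        :: Start from a clean slate: strip inherited ACEs from the sensitive folder.
        icacls "C:\Users\banking\Documents\Finance" /inheritance:r

        :: Grant full control only to the "banking" account (plus SYSTEM and Administrators).
        icacls "C:\Users\banking\Documents\Finance" /grant:r banking:(OI)(CI)F SYSTEM:(OI)(CI)F Administrators:(OI)(CI)F

        :: Belt and braces: explicitly deny the everyday "surfing" account.
        icacls "C:\Users\banking\Documents\Finance" /deny surfing:(OI)(CI)F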

    Like with the sandbox above, my question is: how does this help with buffer overflows and browser exploits of that kind? If someone had a 0-day against the browser plus a privilege-escalation exploit against the OS I'm using, there's nothing there to stop that, right? I don't even think that's a far-fetched scenario; both kinds of exploit are found almost every other day, despite NX, ASLR, compiler hardening and other tricks.


    Full disclosure: my browser of choice is Firefox, for three reasons: it's open source, about:config gives me all the control I need, and NoScript. Sadly it doesn't yet use sandboxing/a low integrity level or a multi-process architecture. I've manually set it to run at low integrity (IE-style "protected mode"), but I'm not sure if that's actually a good idea, or if it's even necessary when running as LUA:
    from http://superuser.com/questions/30668/how-to-run-firefox-in-protected-mode-i-e-at-low-integrity-level, second answer+comment.
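
    For context, this is roughly what that low-integrity setup looks like on my machine (default install paths; these are my own notes, not a claim that they exactly match or complete the superuser.com answer):

        :: Label the Firefox executable low integrity, so processes started from it run at low IL.
        icacls "C:\Program Files\Mozilla Firefox\firefox.exe" /setintegritylevel low

        :: Folders the low-IL browser still has to write to (downloads, profile) need a low label too.
        icacls "%UserProfile%\Downloads" /setintegritylevel (OI)(CI)low
        icacls "%AppData%\Mozilla" /setintegritylevel (OI)(CI)low
        icacls "%LocalAppData%\Mozilla" /setintegritylevel (OI)(CI)low

        :: Verify: the output should include "Mandatory Label\Low Mandatory Level".
        icacls "C:\Program Files\Mozilla Firefox\firefox.exe"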

    Personally I'm not too worried about the common flash, JS, or XSS exploits because I keep my NoScript whitelist really short. What I'd like to improve is protection from 0-day browser exploits that aren't mitigated by using NoScript.

    Well, that's it for my first post :)
     
  2. NoIos

    NoIos Registered Member

    Joined:
    Mar 11, 2009
    Posts:
    607
    You missed the category of software like Shadow Defender and Returnil...

    and welcome.
     
  3. Windchild

    Windchild Registered Member

    Joined:
    Jun 16, 2009
    Posts:
    571
    That's a pretty nice first post. ;) Just a couple of thoughts...

    I'd say a combo of LUA and AppLocker (or SRP if on older Windows systems) performs decently well in terms of security.

    It's certainly true you don't need root to do nasty things like stealing some sensitive files from the user. The problem is, you do generally need to run some code to do such nasty things. AppLocker can make it more difficult to get any malware actually running in the user account. Let's say we've got a drive-by download site going and we're serving some LUA-compatible data stealer malware using a bunch of zero-day exploits. You land on our exploit site with your vulnerable browser, and the exploit shellcode runs. But what does the shellcode do? The vulnerability we're trying to exploit will place some limits on our shellcode - such as how large it can be. Typically the shellcode will be very limited in length, and can't do a whole lot of stuff. These days, if you start analyzing the average piece of shellcode out there, you'll find it does nothing more than download a malicious file and then execute it on the target system. And that's where the average exploit would then fail to own the system, since that file the shellcode tries to execute wouldn't run because of AppLocker. Our data stealer (or perhaps more likely a downloader program for the main malware) never gets to execute, and just sits there being useless - and if our browser didn't just crash due to the exploit, we probably won't even notice anything. If you're surfing around with your test system with a LUA & AppLocker combo, that's pretty much how the story goes with the exploit site of the day.

    Yeah, some people do that - and some others consider it overkill for their needs. But if one feels like being extra-extra careful, one can certainly make one limited account for sensitive tasks and another for "fun" and risky tasks.

    Obviously LUA or AppLocker would not remove the software vulnerabilities that the bad guys try to exploit - nothing does, except installing actual patches for said vulnerabilities. LUA, or AppLocker, wouldn't even prevent exploiting said vulnerabilities - that kind of stuff is what hardening features like DEP and ASLR try to accomplish, at least to some level. But what happens when the vulnerability has been exploited and the malicious shellcode is already running? That's the really important part, and that's typically where LUA and AppLocker could save the day, by simply blocking the malware payload of the shellcode from ever actually running.

    Let's look at your scenario where we have a zero-day browser exploit and a local privilege escalation exploit. The big question is: what does your shellcode do? If your plan is to 1) exploit the browser vulnerability to drop a malware executable on the system and 2) have that dropped malware attempt to exploit the local privilege escalation vulnerability, then chances are you fail, because AppLocker won't allow your dropped malware to execute, step two of your fancy plan fails, and the victim does not get owned. Now, if the vulnerabilities you've found allow you to exploit the privilege escalation vulnerability using just the shellcode from the browser exploit, or programs that are whitelisted by AppLocker, then such an attack would work and own the entire system, root and all.
     
  4. katio

    katio Guest

    Thank you!
    I didn't miss it on purpose. Since I don't use them, I simply forgot. Therefore, an amendment:
    I don't consider them to be a security mechanism, as they do nothing to protect your running system. Yes, they offer some kind of virtualisation/privilege separation, provided you restore to a clean state between risky and sensitive tasks. That's not very practical for day-to-day work, and I doubt anyone has the discipline to do it. Of course, coupled with other, "real" security protection they can enhance it (layered protection).

    @Windchild
    Thank you for your post, really answered a lot. A few things left:

    Does anyone know: is this a theoretical attack, just a PoC, or in the wild?
    Since, as I argued above, the browser today "is (almost) everything" (see Chrome OS), how likely is a browser-only exploit that doesn't rely on other code and, for example, just hijacks your cookies? I've seen a post here regarding browser rootkits; such a thing could hide in the profile folder, where the user still has full access.
    And finally, what's the most practical and realistic way to stop this vector?

    AFAIK a sandbox is pretty secure against all the mentioned risks, and the only way to break out is to find a vulnerability in the sandbox itself. Accordingly, for a successful exploit against a browser running in a sandbox, under LUA and with AppLocker enforced, one needs:
    a browser exploit
    a way around DEP, ASLR...
    shellcode that doesn't need to download other code
    a vulnerability in the sandbox (or one of the workarounds that exist on 64-bit)
    and a privilege escalation to actually do any damage to the system or other users.
    If my analysis is correct, I think such a setup would not only rule out any real risk from the browser exploits we are facing today and in the foreseeable future, but would also please even the most paranoid.
    One thing I've left out: to my knowledge, if one had a local kernel exploit, one could circumvent any and all security on the system - see my link above to theinvisiblethings. However, it's probably very hard to 1) find one and 2) execute it in such an environment, so we can dismiss it as yet another proof that there will never be 100% security.
     
  5. NoIos

    NoIos Registered Member

    Joined:
    Mar 11, 2009
    Posts:
    607
    You're welcome.
    I think you should reconsider and recheck Shadow Defender. It's certainly a matter of personal taste and ways of working, but I find it really easy and practical. I agree with you when you say they do nothing to protect the system (Returnil is an exception - it has a built-in antivirus)... actually, they do nothing to protect the virtual system, but in my opinion they completely cover your original system. Also, Shadow Defender has a really easy mechanism to commit changes. I'm not trying to convince you, just stating some facts. For the rest, I repeat, it's a matter of "taste".
     
  6. Windchild

    Windchild Registered Member

    Joined:
    Jun 16, 2009
    Posts:
    571
    I can't recall ever seeing even a proof of concept of something like that, so I'd say theoretical, if even that is the right word. Let me put it this way: it's not something I'd worry about at this time. Even if you could find vulnerabilities that would let you do this, would it be worth it in an environment where tons of folks are happily running as admin on unpatched systems? Targeted attacks are a different matter, but even when targeting a big target like Google, such complex attacks shouldn't be needed, as recent events demonstrate (big companies letting users browse the net as admin on IE6...). And then there are always those remote code execution vulnerabilities in highly privileged stuff like services, which get your code running as root when successfully exploited. As the Conficker and Blaster worms show, this works more than well enough and isn't complicated, since it requires exploiting just one vulnerability instead of several.

    There are more than enough attacks that don't necessarily intend to get any malware running on the local system: attacks that steal data via cross-site scripting, for example. The problem is that these often don't target the browser, or indeed anything on the local system - instead, they target some vulnerability in a website the human user browses. That makes defending against such attacks tricky. There are still things you can do to make the attackers' lives harder: outright disabling scripting, the anti-XSS features in some browsers or extensions like NoScript, and then there's always just being extra careful about how you surf (the classic example: "don't follow untrusted links" :D ) and trying not to give the bad guys anything to steal (if you delete your cookies all the time, it's going to be hard to steal them - at least all of them, especially the ones most important to services like banking and webmail).

    The very phrase "browser rootkit" hurts my head, though, as it just doesn't make sense. If you're going to install some browser extension or modify a script file the browser uses for some malicious purpose like data-stealing, that's no rootkit, more like plain old browser hijacking given a glorified new name and a nastier purpose (as compared to the old ad-displaying browser hijacks). If the "browser rootkit" is an extension, what stops the user or even an AV software from seeing that extension and realizing it shouldn't be there and is doing something nasty? If it's just a modified script file used by the browser, what's stopping anyone and anything from noticing the file has been modified - what's stopping the browser developers from digitally signing their files to detect such modification?

    I don't quite understand this one. What do we mean by "local kernel exploit"? A code execution flaw in the kernel or some system component running with root level privileges? Something else? In the case of vulnerabilities in the kernel, there are still ways to make exploiting those harder. As always, one would need to get the malicious code designed to exploit the vulnerability running in order to own the system, and things like AppLocker or intrusion prevention software can make it difficult or impossible for the code to ever run, depending on how the attacker is trying to run it (if he's created some little malicious executable, maybe called rootme.exe, that he'd run on the system to exploit the vulnerability, that would be stopped by a ton of security software from HIPS to built-in security features like AppLocker). So I can't quite see how just having a local kernel exploit would allow one to circumvent "any and all security." Depends on how the kernel vulnerability can be exploited and what security exactly is present on the system.
     
  7. katio

    katio Guest

    I just picked it up here.
    After describing UAC, Protected Mode and privilege isolation through different user accounts, Joanna Rutkowska writes:
    Reading this again, I realise I somehow got it backwards:
    "could be bypassed by a clever attacker under some circumstances" comes before "all the security scheme implemented by the OS is just worth nothing". So even though she didn't say it explicitly, AppLocker, IDS and so on aren't "magically" circumvented just because someone found a bug in the kernel or in kernel drivers.
    Maybe it's intentionally worded that way - you can't go wrong with a bit of scaremongering and sensationalism if you're pushing your own agenda, eh? Because to me the "protection" we've got seems pretty adequate. Of course, I won't say no to something better than what we have today. The bad guys are getting cleverer too, after all.
     
  8. Windchild

    Windchild Registered Member

    Joined:
    Jun 16, 2009
    Posts:
    571
    Oh yes. Scaremongering is one omnipresent thing in the security industry. I love the choice of words here, for example: "Still, even though that might look like a secure configuration, this is all just an illusion of security! The whole security of the system can be compromised if attacker finds and exploits e.g. a bug in kernel driver." That's a little like saying: "Sure, having all those highly trained and heavily armed agents guarding the president all the time might look like a secure configuration. But it's all just an illusion of security! If the attacker just finds a bunch of suicidal guys to come right in and shoot all the agents, they could just shoot the president too while they're at it. It's just an illusion of security!!!11!" :D I believe that's called life: 100% security is kind of hard to achieve, especially if you want to actually do something. Not that I see anything wrong with making OS compromise easier to detect and making it easier to verify the OS has not been tampered with. The scaremongering and big words are a problem for me, though.

    The protections available today seem very much adequate to me as well. I certainly wouldn't mind better stuff in terms of security, but there's more to consider than just that: how about speed, for example? Running everything in a million different virtual machines might sound like a clever way to keep malicious code away from your online banking, but it's going to hurt performance. I'm using computers to get things done with them, not to mess around with making a config as secure as humanly possible. :D What I like best about LUA and AppLocker is that they have zero performance or stability hit that I can notice, and they still provide more than enough security for the needs of most any user who knows what they're doing.
     
  9. MrBrian

    MrBrian Registered Member

    Joined:
    Feb 24, 2008
    Posts:
    6,032
    Location:
    USA
    It's true that other low integrity apps can now write to, for example, your Firefox downloads folder, but how many other low integrity apps do you use regularly? Low integrity Firefox can no longer write to many of the file and registry locations that a standard user can write to.

    Here's how I deal with the Firefox downloads folder:
    a) move downloaded files to a non-low-integrity folder as soon as possible
    b) copy a small .exe with a digital signature to the Firefox downloads folder and keep it there permanently as a "canary in the coal mine" test file; when moving downloaded files to a different folder, check the integrity of the test .exe to see if any app has modified it, and delete the other downloaded executables if the test .exe has been tampered with (a possible way to script that check is sketched below).
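
    For what it's worth, a sketch of how step b) could be semi-automated; the file names and locations are just placeholders, and the baseline hash is kept outside the low-integrity Downloads folder so a low-IL process can't rewrite it:

        :: One time, right after placing the canary: record a baseline hash.
        certutil -hashfile "%UserProfile%\Downloads\canary.exe" SHA1 > "%UserProfile%\canary-baseline.txt"

        :: Before trusting freshly downloaded files: recompute and compare.
        certutil -hashfile "%UserProfile%\Downloads\canary.exe" SHA1 > "%TEMP%\canary-now.txt"
        fc "%UserProfile%\canary-baseline.txt" "%TEMP%\canary-now.txt" >nul && echo Canary unchanged || echo CANARY MODIFIED - treat everything in Downloads as suspect

    Checking the Authenticode signature itself (e.g. with Sysinternals sigcheck) instead of a hash would be closer to the "digital signature" idea, but the hash comparison is the simplest thing to script.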
     
  10. katio

    katio Guest

    Wow, we had the same idea, I'm already using this in some other context ;)

    Since you run Firefox at low IL, can you tell me if the icacls commands from superuser.com are correct and complete? Anything else I should know, since this doesn't seem to be officially supported?
     
  11. MrBrian

    MrBrian Registered Member

    Joined:
    Feb 24, 2008
    Posts:
    6,032
    Location:
    USA
    Please see https://www.wilderssecurity.com/showpost.php?p=1679035&postcount=22.

    I've been running Firefox at low integrity for only a few days. So far, the only problems I've had are with two browser-based Java apps that need to use local resources; Internet Explorer works fine with those two Java apps. Sites that use Adobe Flash cookies may pose another problem, although I haven't run across it yet.
     
  12. chronomatic

    chronomatic Registered Member

    Joined:
    Apr 9, 2009
    Posts:
    1,343
    I view Rutkowska as sort of a crank - just slightly above Steve Gibson. She knows enough to come across as proficient and has discovered some clever hacks, but that doesn't mean she is an undisputed goddess of operating system design. There are a lot of people who have expertise in one area and aren't really qualified to speak in others. We see this a lot in various scientific fields (people like Stephen Hawking speaking about issues they have no qualifications to speak about, but doing it anyway because of who they are). I think this is the case with Rutkowska. She might be a good security researcher, but that doesn't mean she would know how to design an OS (an extremely complex task that requires tons of expertise in many different fields of systems programming, etc.).

    She talks about verifiable operating systems, but she fails to understand that mathematicians disagree over whether such a thing is even possible at all, and if it is, what precisely verification means. There have been a number of projects over the years to create an OS that meets some sort of "verifiable" criteria, but we still don't have one that is good enough for general use (all we have are some very specialized micro-kernels for embedded devices, etc.). There is no reason to believe that Rutkowska is somehow going to create one herself. And even if she could, she would be reinventing the wheel, as there are already a couple of open-source projects in the works that appear to be doing this right (by creating a secure programming language, for instance).

    And the reason software falls behind hardware in the "verification" arena is simple: software systems are much more complex.
     