More FUD about LINUX security

Discussion in 'all things UNIX' started by linuxforall, Apr 16, 2012.

Thread Status:
Not open for further replies.
  1. m00nbl00d

    m00nbl00d Registered Member

    Joined:
    Jan 4, 2009
    Posts:
    6,623
    Internet Explorer 9 runs at low integrity level by default (= Protected Mode). Chromium-based web browsers also make use of low integrity levels. Then we've got Adobe (Acrobat) Reader X, which likewise runs within a sandbox. Note that these sandboxes aren't necessarily just the use of integrity levels.

    Yes, psexec allows you to run apps at low integrity level. You can also use Process Explorer, Process Hacker, and maybe others. There's also a tool by security researcher Didier Stevens, runasil. The beauty of these tools is that you don't have to apply an explicit low integrity level to the executables themselves.

    I've seen some users mention in the past that they run Firefox with a low integrity level as well. Granted, it's not how things should go. Firefox is "cooking" something in this regard, and that's great news. :)

    There's quite a lot one can achieve with integrity levels and ACLs, once you get to know them.

    -edit-

    By the way, Windows Media Player runs fine at a low integrity level. You can use any of the above-mentioned tools to execute it, so there's no need to mess with it directly.
     
  2. According to this, they have other security issues as well: http://kerneltrap.org/Linux/Abusing_chroot It seems to me that there is very little similarity between LXC containers and chroots on Linux, other than how the commands are invoked.

    Edit: umm hold that thought, apparently LXC containers cannot yet be considered secure.

     
  3. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    LXC is built to improve on chroot by creating a virtual environment with copy-on-write.

    Any bypass of chroot requires root (barring some kernel exploit). Chroot was not designed as a security mechanism; it was designed for application testing. By design you're supposed to be able to get out of a chroot if necessary, but doing so requires root (calling chroot itself needs root), and you can prevent applications from elevating and thereby reinforce the chroot. That's the design of it.

    If you use chroot + setuid (dropping root) and stop the process from ever becoming root again, it won't be able to escape. That's kinda just the way it works.
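    That chroot + privilege-drop pattern can be sketched in Python's `os` wrappers around the chroot(2)/setuid(2) syscalls (a minimal sketch, assuming a Linux host; the uid/gid of 65534 for `nobody` is illustrative, and the function is only defined here, not run, since it needs root):

```python
import os

def enter_chroot_and_drop(new_root, uid=65534, gid=65534):
    """Hardened-chroot pattern: chroot, move the cwd inside the jail,
    then drop root so the process can never call chroot(2) again to
    escape. Must be started as root (CAP_SYS_CHROOT)."""
    os.chroot(new_root)   # confine the filesystem view to new_root
    os.chdir("/")         # don't keep a working directory outside the jail
    os.setgid(gid)        # drop the group first...
    os.setuid(uid)        # ...then the user; re-chrooting is now impossible

# Not executed here: it requires root, e.g. enter_chroot_and_drop("/var/empty")
print("enter_chroot_and_drop defined; run as root to actually confine a process")
```

    The ordering matters: chdir before dropping privileges (so no directory fd points outside the jail), and setgid before setuid (after setuid, the process can no longer change its own groups).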

    This is similar to: https://en.wikipedia.org/wiki/Jail_(computer_security) or the chroot jail.

    Whereas LXC attempts to create something similar to:
    https://en.wikipedia.org/wiki/FreeBSD_jail

    None of these things (including AppArmor/SELinux) is perfect on its own; it's more about the combination. You can use a SUID/chroot sandbox very well as long as the application drops root, but it works best if you couple it with AppArmor to further lock down any IPC, process spawning, or attempts at DAC_OVERRIDE. These all go perfectly with seccomp mode 2 filters, which whitelist system calls, thereby limiting the visible kernel attack surface. Also worth noting is that there is chroot(2) and chroot(8), chroot(8) being the more secure of the two.

    The point I was making is that all of these tools are available for the user and developers to make use of. If a developer and user make use of all of these it makes things very difficult for attackers. That includes making use of chroot.

    edit: Also worth noting that Chrome's sandbox chroots each process.

    More information on chroot breaks:
    http://www.bpfh.net/simes/computing/chroot-break.html
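    The classic break described at that link only works if the process inside the jail still has root, which is exactly the point above. As an illustrative sketch (hypothetical code, defined but deliberately not executed here):

```python
import os

def escape_chroot():
    """Classic chroot(2) escape: a root process inside the jail keeps a
    directory fd, chroots deeper so its cwd falls outside the new root,
    then walks up past the old root and re-chroots at the real '/'.
    Fails with PermissionError for a non-root process."""
    os.mkdir("inner")                 # a deeper directory to chroot into
    fd = os.open(".", os.O_RDONLY)    # fd referencing outside the new root
    os.chroot("inner")                # cwd is now outside the chroot
    os.fchdir(fd)                     # jump back through the saved fd
    for _ in range(256):              # climb up to the true filesystem root
        os.chdir("..")
    os.chroot(".")                    # jail gone
    os.close(fd)

print("escape_chroot defined; without root it fails, as described above")
```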

    So basically, if your chroot sandbox is done properly they need a security hole (most likely a kernel exploit) to bypass it. That's, of course, possible. But there are methods for dealing with kernel exploits as well.

    tl;dr: chroot is actually pretty secure; the problem is that by default it is not a security mechanism, but it can easily be extended into one.
     
    Last edited: Apr 17, 2012
  4. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137
    Good explanation Hungry Man, explains it well, thanks.
     
  5. Thank you Hungry Man, that's news to me.

    It does mean that you can still bust out of a chroot if you find a local privilege elevation exploit (which are a dime a dozen these days), but I guess that's what "layered security" is about - better to require that one extra exploit to get to the user's data.
     
  6. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Local privilege escalation does happen, but you can prevent it. AppArmor/SELinux profiles can make local exploits more difficult, and a process inside the chroot might not have access to what the exploit needs. You can still create race conditions or attack the kernel, but seccomp filters work really well to prevent most kernel exploits.

    The combination of chroot + AppArmor/SELinux + seccomp is incredibly powerful. I was actually talking to someone last night about AppArmor's ability to prevent remote exploits by blocking the exploit's ability to function properly, and they showed an example. It's very strong and very difficult to bypass on its own, let alone when combined with chroot + seccomp.
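    As a rough illustration of the AppArmor side, a profile along these lines can deny DAC_OVERRIDE and lock down filesystem access for a confined program (a sketch only; the binary path `/usr/bin/example` is hypothetical):

```
# /etc/apparmor.d/usr.bin.example -- illustrative profile
#include <tunables/global>

/usr/bin/example {
  #include <abstractions/base>

  deny capability dac_override,   # block attempts to override DAC permissions
  deny /home/** w,                # no writes anywhere under user home dirs
  /usr/bin/example mr,            # may map and read only its own binary
}
```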
     
  7. Hmm, maybe the security situation on Linux is better than I thought then. It still doesn't seem very good to me by default on a lot of distros, though.
     
  8. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    It really depends on the distro. A distro designed for security will be more secure by default. Something like Hardened Gentoo comes with tons of security features by default, including role-based MAC and probably SELinux as well. Ubuntu is made for ease of use and therefore only sandboxes a few services by default, leaving it up to the user to flip the switch and get the rest working.
     
  9. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137
    Said and done: in all the hacking contests, Ubuntu prevails over Windows and Mac. So much for default security.
     
  10. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    I don't think that's necessarily true linuxforall. Examples?
     
  11. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137
  12. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    ??

    Pwn2Own isn't a security benchmark. It's cool and fun but in the end it's more about what they say rather than what they demonstrate.
     
  13. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137

    Security is perception, and mostly a false one, so nothing is absolute in that sense, and yes, Pwn2Own is in the same category. However, Linux by its very exposure is far more secure than the other two, no ifs, ands, or buts about it.
     
  14. guest

    guest Guest

    For you, maybe. I think user-friendliness is this: http://en.wikipedia.org/wiki/Usability

    Are you saying that one needs to change his personality in order to use Chakra and/or find Chakra to be user friendly? lol

    Nope. When it comes to protection from vulnerabilities, process trumps “many eyes”.

    Read: http://technet.microsoft.com/en-us/library/cc512608.aspx
     
  15. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137
    That's your PERCEPTION and BIAS, as well as delusion. USER friendly is what you get accustomed to. Those using Linux find it far more user friendly because they are accustomed to it; the same goes for Mac and Windows users. The one-button Mac mouse is a classic case of that same perception. For regular two-button mouse users, it is truly a strange, unconventional quirk of the Mac, but for its users, it's heaven-sent.

    Not if one is stuck in a GROOVE and has tunnel vision ;)

    Process is projected hype, nothing more.
     
  16. guest

    guest Guest

    I think all you wrote is just rhetorical nonsense: you can measure "user friendly" using scientific methods: http://en.wikipedia.org/wiki/Usability#Investigation

    That's not very clear. Elaborate please.

    If that's true, go on and deconstruct/prove false what that text shows: http://technet.microsoft.com/en-us/library/cc512608.aspx

    It must not be hard if "process" is "nothing more" than "projected hype".
     
  17. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Process is plenty important. Having a conversation on process vs open source is ridiculous. It is not one or the other.
     
  18. guest

    guest Guest

    We are discussing realities, not theories.

     
  19. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Please point to the facts in that quote. I see a lot of beliefs.

    To reiterate, the two models are not mutually exclusive. You can have a secure process and open source.
     
  20. guest

    guest Guest

    If they are false, prove them false. Go to the link I provided and see. They put 15 sources backing up what was stated in the whole article.

    I didn't see Pat Edmonds saying they were mutually exclusive. Point to where he said that in the article please.
     
  21. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    So if I get two pdfs with 8 sources each does that make open source +1 better?

    Their methodology is flawed. I'm in the middle of compiling my own kernel with grsec. I'll post more later.

    I will say this - they aren't false in their assertion that writing code in a secure manner (a secure process) is important, potentially more important than being open source. That's just the tip of the iceberg on that subject. Everything else in that paper is a bit funny.

    edit: this one's got like 30 sources! http://www.cs.washington.edu/education/courses/csep590/05au/whitepaper_turnin/oss(10).pdf

     
  22. guest

    guest Guest

    Do the right comparison. Pat Edmonds' article is not focusing on open source's theoretical advantages/disadvantages. It is focusing on the realities of the development models of Linux and Windows Server as they simply happen to be.

    And I'm sure you can do better than calling things you disagree with by this or that name.

    I'll wait for your reply playing a nice game while you compile your kernel. :D
     
  23. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    It'll likely be tomorrow. Their conclusions are based on useless information.
     
  24. guest

    guest Guest

    No problem.

    Don't forget to also prove that to be true, tomorrow.
     
  25. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    This article really reeks of misdirection that preys on anyone who doesn't actually know the facts.

    Weren't you complaining just the other day about irrelevant information used to sway users? You can lock yourself out of Windows as well if you know how. You can break any operating system.

    Interesting that they try to insult SELinux right off the bat as it's something of a standard.

    What they're talking about here is the fact that compiling your own kernel = losing distro support. This is not unreasonable as when you compile your own kernel you are completely changing the operating system. You can run all the SELinux/ AppArmor profiles you like on Redhat and no one's going to pull out support. [ https://www.redhat.com/support/ https://help.ubuntu.com/community/Kernel/Compile ]

    This is definitely fair, and true to a large extent, though it's important to note that it only holds when comparing otherwise identical systems where the only difference is that one product's source code is available. In reality, because Linux is so popular, and because of its target audience, it's difficult to say how many people are really looking at it.

    As for qualified people behind Linux, let's set aside the native devs such as Linus who are looking at the code and remember the big companies behind it:
    Dell
    IBM
    Google
    RedHat

    Probably a few trillion dollars between them and some of the brightest minds - not to mention that IBM has pioneered security technologies (first implemented in Linux) before.

    Linux is community based, but that doesn't mean it doesn't have massive backers.

    How many software revisions does the Windows kernel go through? I have no idea. On Linux there are millions of lines of code added; 300,000 lines in the last kernel revision alone. [ http://royal.pingdom.com/2012/04/16/linux-kernel-development-numbers/ ]

    As with any large open source project there are project managers and a legitimate hierarchy of maintainers (with Linus right at the top), and these managers are tasked with security audits. There are also outside parties (remember, anyone can see the source code, so any person or organization is capable of auditing) that audit Linux. [http://lkml.indiana.edu/hypermail/linux/kernel/0006.1/0427.html] This is not the only supplementary/complementary auditing project that's happened.

    Useless information, considering the completely different models for handling bugs in an open source project. Is the implication that Windows has fewer known bugs and is therefore less buggy? Who knows? Oh, MS employees...

    The only contrast here is that Microsoft follows the SDL. The implication is that Linux does not have a security model but in truth all this means is that they don't follow MS's.

    blah blah blah, Microsoft indulges in its admittedly very effective SDL and its investments, which have absolutely paid off in terms of hardening the Windows OS considerably.

    Two false implications.

    1) That linux developers are not invested because it's not their job.
    This is, of course, entirely false because it completely forgets that there are companies that sell Linux products/ absolutely rely on Linux security. Again, RedHat. Or... the DoD.

    2) The assumption is that money is the best motivation and therefore a developer who programs for money is more motivated than a developer who programs for fun. This is obviously not true and a ridiculous generalization.

    Same with Linux. Bug reports are automatically sent.

    Of course they're entirely correct. Thank god we can now remove IE and various other things we have no need for. Of course, it's not nearly to the extent of Linux, which lets you compile out literally anything.

    Now let's take a look at those charts.... in the next post. The kernel is compiling and this will take a while so I'll actually be up for a variable amount of time.

    I'll ask that you don't respond to any point here yet as I still have my second post and the argument will get all jumbled.
     