distros too slow at patching?

Discussion in 'all things UNIX' started by katio, Sep 24, 2010.

Thread Status:
Not open for further replies.
  1. katio

    katio Guest

    http://people.canonical.com/~ubuntu-security/cve/
    http://security-tracker.debian.org/tracker/
    Comes in really handy for the bad guys, eh?

    It seems like many of these are already fixed upstream. Not surprisingly, the rolling Debian Sid has the fewest open security bugs (and they don't even have a security team). For the sake of fairness it should also be mentioned that these issues are mostly medium severity and below; higher-rated ones are fixed faster (however, I've seen a bug rated "Medium" on the two CVE trackers that received a "Highly critical" rating from Secunia...)

    It would be interesting to see such a security tracker for a rolling-release or source-based distro as well, like *BSD, Gentoo or Arch. But from some spot checks they have similar ETAs, because packages are kept in "testing" or whatever they call it for a while too.

    I know stability is the argument here. But upstream does testing as well. Do we end up testing little security patches twice?
    My favourite example is the Mozilla apps. I never saw a reason for the extensive testing (or even backporting) done by each distro. Mozilla already releases binary builds for Linux which work without a fuss on any distro. They have their own alpha/beta testing, and what they release is arguably stable enough to basically just copy into your own repository.
    In Ubuntu, Firefox updates used to take up to a week; now we get new point releases the next day or so, and even new major versions are released instead of only backporting security fixes - all thanks to plenty of user complaints, I imagine. So we know it's possible to update stable releases early and fast. Does this mean it all comes down to a flawed policy decision?

    Windows doesn't have this problem (talking about 3rd-party software): when upstream releases a security update you can always get it the same day, as a tested binary - if not through an automatic updater, then via Secunia PSI 2.0. The only *NIX distro I know that handles it the same way is, ironically, LFS, because there are no repos. There the ETA solely depends on how quickly your PC compiles the sources (and how long it takes you to figure out dependency hell).

    What's your take on this? Do you see a need to improve how security patches are handled? Or do you share the view that it's "good enough": only rush a patch if there are already exploits in the wild. In other words, reactive vs. proactive security, and stability > security.

    update:
    also see http://lwn.net/Articles/378865/ and http://lwn.net/Articles/404050/
     
    Last edited by a moderator: Sep 24, 2010
  2. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,221
    I prefer security patches to be tested as long as necessary rather than rushed out, ruining more than the exploit they were meant to fix in the first place. Besides, the whole security thingie is way overrated. Like Facebook.
    Mrk
     
  3. Eice

    Eice Registered Member

    Joined:
    Jan 22, 2009
    Posts:
    1,413
    Count me in as another user disgruntled about the state of software updates in Linux. I've never figured out why updating software in Linux has to be this taxing on user patience. The most often touted reason I've heard is stability, which never made any sense to me given how you can upgrade immediately to new versions of software on Win/Mac (or even on Linux with a manual install/extraction, bypassing the package manager) and not encounter too many problems.

    The only explanations that hold any water for me right now are that Linux's infrastructure somehow creates extra problems that make all this additional testing necessary, or - like you mentioned - some flawed policy decisions were made somewhere.
     
  4. chronomatic

    chronomatic Registered Member

    Joined:
    Apr 9, 2009
    Posts:
    1,343
    I dunno, but whenever, say, Firefox releases a security fix, Ubuntu usually has it in their repos the next day (in my experience). There might be a few exceptions, but I have never thought it was unreasonably slow.

    And the binaries Mozilla releases are 32-bit ONLY. It is up to the distros to compile the code for 64-bit (a lot of people seem to forget this). Since there are lots of 64-bit users, that's why there is a bit of a delay.

    Kernel updates are better. For instance, that recent root vuln which allowed a local user to escalate to root was fixed upstream the next day and released by the distros about 2 days after that. You're looking at a 3-day turnaround time. Let's see Microsoft do that (with MS, you're waiting a month).
     
  5. Eice

    Eice Registered Member

    Joined:
    Jan 22, 2009
    Posts:
    1,413
    The last time I compiled Firefox was 2-3 years ago. That took me about an hour, or possibly less, on a low-end laptop. I don't think Ubuntu needs a day or more to compile Firefox on their blazing-fast servers.

    And even then, it's not only Firefox, but for almost everything else as well. I'm typing this on OOo 3.2.1, installed from .debs downloaded off the official site because Ubuntu refuses to release it to the repos even after almost 4 months.

    All in all, I really doubt build time has anything to do with it. MS at least has the plausible excuse of having to test their fixes. I don't really understand why Ubuntu needs months to test software that has already gone through an extensive alpha/beta/RC process and been released as stable.
     
  6. katio

    katio Guest

    We have to distinguish two different aspects. One is security patching: how long it takes until an upstream security fix is merged downstream. The other is how long it takes until a new feature release is available on a stable-release (as opposed to rolling-release) distro.

    The former is generally regarded as pretty good and I haven't heard any complaints about it. I assumed ALL security patches from upstream would be released in a more or less timely fashion, either by updating to a new version or by backporting. But now that I've seen these security trackers I'm not so sure about that; there are CVEs reaching back several years!

    About the second issue please see the LWN articles.
    Ubuntu is moving to a more Windows/Mac-like approach simply because that's what the users want (= bundling libraries instead of using system libs, and feature updates in stable releases). The old-school approach (Debian, RHEL...) does not work for a modern desktop system where people want the latest features of OOo, Firefox, Chrome/ium and similar programs immediately. Development on these projects also moves at a faster pace these days; long-term support isn't just infeasible, it's becoming outright impossible.

    @Eice
    OOo in your example isn't held back because testing takes so long, but because of policy: "stable" releases only get security updates, no feature updates. Period. If you want new versions as well you need to enable backports or 3rd-party repos, or switch to a rolling-release distro. Then the ETA really only depends on how long testing takes.

    @chronomatic about Firefox
    Reread my first post - that's exactly what I was saying...
    But have a look at http://packages.debian.org/search?keywords=iceweasel&searchon=names&suite=all&section=all The latest STABLE Firefox version is only in experimental. That's where I'd expect 4.0b6 or a daily trunk build, but no, even Sid is outdated. Stability is absolutely no argument for the Debian guys here. Sid and testing don't need to be stable, and one of the most similar projects in scope - FreeBSD - has the current version in its stable tree! Fedora and Gentoo usually lag behind quite a bit (while always touting how bleeding edge they are), whereas the much smaller Arch Linux is usually much faster, if not the fastest Linux distro.
     
  7. Eice

    Eice Registered Member

    Joined:
    Jan 22, 2009
    Posts:
    1,413
    I think that was part of my point, and something I've never understood. Are the latest versions of OOo, Pidgin or some such really going to pose a non-negligible risk of destabilizing anything? That sounds rather absurd.
     
  8. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,381
    Location:
    West Yorkshire, UK
    Distros have far more testing to do than upstream (usually).

    They have to test any dependent packages for compatibility and security, and they also have to reapply any custom patches (and possibly rework them because of upstream changes).
    Then the packages need to be built and the upgrade process tested.
    So there is a whole lot more work that needs to be done for most distributions.

    Cheers, Nick
     
  9. katio

    katio Guest

    Two things:
    Stable is more than just "the software doesn't crash randomly" - it means the GUI, config files and features stay the same. That's important for sysadmins and helpdesks with hundreds or more (l)users who would otherwise call in for help if stuff doesn't work like it used to.

    New versions of flashy, end-user-visible software also often depend on new libraries and new versions of other packages. Libraries usually have a lot of reverse dependencies, so in order to update a single system library you need to test tons of software that depends on it.
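
    As a rough illustration of that fan-out (the package and library names below are made up, and Python is used only as convenient pseudo-notation): change one library and everything that links against it becomes a candidate for re-testing.

    Code:
    # Toy dependency map; the package and library names are hypothetical.
    depends_on = {
        "firefox":     ["libxyz", "libgtk"],
        "thunderbird": ["libxyz", "libgtk"],
        "pidgin":      ["libxyz"],
        "gimp":        ["libgtk"],
    }

    def reverse_dependencies(lib):
        """Every package that would need re-testing if `lib` is updated."""
        return sorted(pkg for pkg, libs in depends_on.items() if lib in libs)

    print(reverse_dependencies("libxyz"))  # ['firefox', 'pidgin', 'thunderbird']

    On a real system the map has thousands of entries, which is where the testing burden comes from.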


    The obvious thing to do would be to install new versions alongside the old ones; there's already a versioning system in place to keep "dll hell", or more appropriately "lib hell", to a minimum, but apparently there's still some risk, and that's why distros are generally reluctant to go this way.
    The other options are bundling, static linking and backporting.
    Bundling means the lib is shipped together with the new program as a separate file and stored in a separate directory so it doesn't mess with the system libs.
    The problems are the risk of forks (every app ships its own slightly patched lib instead of pushing its changes and improvements upstream) and security updates (instead of updating one vulnerable system lib you end up having to wait for the upstream of every app that ships its own copy of that very lib to patch it. Personally I don't think that's an issue with big software like Firefox or OOo. They are cross-platform, which means they already do the bundling for Mac and Windows, there's a lot of pressure to patch fast, and their devs are competent enough to stay on top of security announcements.)
    Statically linked libs share the above shortcomings; here the lib isn't shipped as a separate file but compiled right into the binary.
    Backporting means you patch the software so it works with the outdated system libs shipped by your $distro stable. It's a lot of work for the maintainers and sometimes it plain doesn't work. But it is still the preferred way to do it.

    One thing to note is memory usage. Say we have three apps running:
    a random app from the repo, Thunderbird and Firefox, and they all depend on the same libxyz.
    With backporting, all of them use libxyz1. With bundling, Thunderbird and Firefox share their new libxyz2 while the other app still uses libxyz1, so two libs are loaded into RAM. With static linking, the random app loads the system lib, Thunderbird loads its big executable which contains a copy of libxyz2, and Firefox does the same; the result is three copies of libxyz loaded into memory.
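
    To put numbers on that - a quick sketch using the same hypothetical apps and libxyz versions as above, and taking the bundled-copy sharing claim at face value:

    Code:
    # Which libxyz build each running app maps into memory, per strategy.
    scenarios = {
        "backporting": {
            "random-app":  "system libxyz1",
            "thunderbird": "system libxyz1",
            "firefox":     "system libxyz1",
        },
        "bundling": {
            "random-app":  "system libxyz1",
            "thunderbird": "shared bundled libxyz2",
            "firefox":     "shared bundled libxyz2",
        },
        "static linking": {
            "random-app":  "system libxyz1",
            "thunderbird": "libxyz2 inside the thunderbird binary",
            "firefox":     "libxyz2 inside the firefox binary",
        },
    }

    for strategy, mapping in scenarios.items():
        distinct = set(mapping.values())  # identical in-memory copies count once
        print(f"{strategy}: {len(distinct)} cop(ies) of libxyz in RAM")
    # backporting: 1, bundling: 2, static linking: 3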



    IMO, for big cross-platform software the only sensible way is to give the upstream devs a lot of freedom (= let them bundle their stuff like they do on those other platforms) - "they know best". Any other option puts a lot of burden on each distro, and it's the only way to get new AND stable versions out fast. That's all that matters for desktop-focused distros.

    For small projects backporting is comparatively easy and the most correct way to do it.

    If there's something not caught by these two, I'd suggest the library versioning should be improved. It's the most flexible approach, and in the long run you could also minimise the bundling in cross-platform packages.
     
  10. Eice

    Eice Registered Member

    Joined:
    Jan 22, 2009
    Posts:
    1,413
    katio,

    Thanks for the explanation. That makes a lot more sense than everything else I've heard so far.

    One last question, though: if new software needs new, non-bundled dependencies, why can some software, e.g. Firefox and Chromium, be run simply by downloading and extracting a compressed archive? Why can some other software be installed via .deb files without enabling additional repos to download those newer dependencies? Do they just rely on older packages that exist on the user's system and hope for the best, or are the libraries they need bundled in the archive as well?

    Now that I know that slow updates are an inherent result of the way Linux handles libraries, however, it does further cement my personal dislike of the Linux update system...
     
  11. katio

    katio Guest

    Most of these do both: they have their own, usually forked or slightly patched libs which are bundled, and they also rely on system libraries which are very common and found on pretty much every distro. The Linux Standard Base, or "LSB", tries to ensure some binary compatibility across all Linux distros and sets some minimal standards for what libs and other functionality need to be present. When you install .debs, the installer automatically fetches these dependencies from the main repo if they aren't installed already.
    Often dependencies are fairly forgiving, e.g. they require libxyz version 1 or later instead of, say, depending on certain bugs only present in a specific point release.
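
    A simplified sketch of that kind of "forgiving" check (made-up version numbers, and deliberately not the real dpkg comparison algorithm, which also handles epochs, tildes and letters):

    Code:
    # Minimal 'operator version' check, e.g. ">= 1.0".
    def satisfies(installed, constraint):
        op, required = constraint.split()
        as_tuple = lambda v: tuple(int(x) for x in v.split("."))
        if op == ">=":
            return as_tuple(installed) >= as_tuple(required)
        if op == "=":
            return as_tuple(installed) == as_tuple(required)
        raise ValueError("unsupported operator: " + op)

    print(satisfies("1.2.3", ">= 1.0"))   # True  - a relaxed dependency is happy
    print(satisfies("1.2.3", "= 1.0.1"))  # False - a strict pin would not be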

    Not if you use a rolling release: on Gentoo, for example, you get the latest stuff ASAP, and if that's not fast enough you can enable testing. Gentoo is still a very stable distro (if you can put up with all the compiling and a lot of configuring by hand). Arch is also a very popular option for all those who are fed up with the slowness of "stable" releases.

    I prefer to have a stable, tested base system (Debian stable, CentOS or Ubuntu LTS, for example) and, on top, only a handful of backported programs where I really need new features or performance enhancements. Best of both worlds, so to speak.
    It's really less about "stability" and more about how much maintenance is required.
     
  12. linuxforall

    linuxforall Registered Member

    Joined:
    Feb 6, 2010
    Posts:
    2,137
    The word is stability. I run Ubuntu on many PCs, some of them in real testing environments, and none of them have any issues; this is the reason updates are on the conservative side. Of course, for those with no patience there are myriad updated PPAs, and every six months Ubuntu brings out a new release with updated packages and improvements.
     