Bug turnover vs. bug accumulation?

Discussion in 'other security issues & news' started by Gullible Jones, Aug 10, 2012.

Thread Status:
Not open for further replies.
  1. Because I'm interested in the quirks of maintaining legacy systems...

    Say that some segment of code contains a vulnerability. An update fixes the vulnerability. Later, another vulnerability is discovered in the updated code.

    What are the chances that the original code did not contain the new vulnerability? How does the chance change with the length of the code segment in question?

    IOW: does the list of known security holes tend to lengthen ad infinitum in legacy software? Or is there a significant rate of turnover, with bugfix and feature updates introducing their own set of bugs? Is there a steady turnover as new bugs replace old... Or does the number of bugs usually dwindle, as the software asymptotically approaches its theoretical limit of good engineering?

    Also: do vulnerabilities tend to be distributed evenly in most code bases, or do they tend to concentrate in certain parts of the code? If the latter, what's the effect on bug accumulation and turnover?
     
  2. jna99

    jna99 Registered Member

    Joined:
    Apr 18, 2012
    Posts:
    94
    Location:
    127.0.0.1, Netherlands
    I'm not sure I can personally give you advice, but I've found a short article that summarizes, more or less, what you can do to minimize bugs. Anyway, here's a link for a quick overview, most of which you've probably thought of yourself.
    Actually, the link shows what most people eventually arrive at on their own; it's basically common sense about how to approach bug fixing. Sorry I can't be of more help. Maybe someone else has good experience with bug fixing in general and can give you some solid advice.

    http://www.nonhostile.com/howto-fix-a-bug.asp
     
  3. jna99

    jna99 Registered Member

    I've found another very interesting article, about bug-fix time prediction models and bug-fix procedures.
    The title is "Characterizing and Predicting Which Bugs Get Fixed:
    An Empirical Study of Microsoft Windows". It's a PDF file.

    It may be interesting to read in relation to your question of bug turnover vs. bug accumulation. It's an official paper from Stanford University and Microsoft Research.

    http://research.microsoft.com/pubs/118790/guo-icse-2010.pdf
     
  4. BrandiCandi

    BrandiCandi Guest

    Gullible Jones, I'll point you to a very similar discussion from another forum a while back. Ms. Daisy (not the OP) had fundamentally the same question as you.

    http://ubuntuforums.org/showthread.php?t=1959614

    Pay attention in particular to the posts by Dangertux. If you read all his posts in that thread I think you'll find your question thoroughly answered.

    Essentially the upshot is that you must do some risk management: understand the old vulnerabilities and the new ones, decide whether it's worth updating, and decide whether you can defend against a vulnerability without updating.
     
  5. jna99: Thank you, I'll get to that at some point... :) Does look like interesting reading though.

    BrandiCandi: Thank you as well. I'm not sure how much I trust the opinion of "Dangertux" (though he is apparently an IT security consultant) but that's basically the line I was thinking along - that the "absolute" security advantage of keeping software up to date might depend on the nature of the software itself.

    (OTOH it's worth noting that "security through obscurity" is often a very legitimate kind of security. It's why SUID scripts are prohibited in modern UNIXes after all. I could see cases where it would be better to have five potential exploits than one known ITW one... Which Dangertux did point out though, re "doing research.")
     
  6. TheWindBringeth

    TheWindBringeth Registered Member

    Joined:
    Feb 29, 2012
    Posts:
    2,084
    Once upon a time I was asked to port $software to a new board. Not being familiar with $software, I walked through everything and saw a poorly documented driver that manipulated a chip incorrectly. Built it as-is and it seemed to work. Built a corrected driver and that seemed to work too.

    I talked to the guy who worked with $software before me. He had inherited the code from the original author and only made minor modifications to that driver, without noticing the mistake I had spotted. I then ran a prolonged test of the original driver and my driver: the original worked fine, but mine had occasional errors.

    So I tracked down the person who designed the chip. He said there was a bug in the chip that was never fixed (one not mentioned in any Errata Sheet or Application Note) and confirmed that the "mistake" I saw would work around it. It was at that point he said, "Now that we're talking about this, I think I remember talking to someone at your company about this before." Per his notes, he had talked to the original author of $software.
     
  7. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,148
    Dangertux is very intelligent.

    This is very language-specific, but in a typical program you're going to have issues when moving data into or out of a buffer, when flushing buffers, in loops, and anywhere a user can provide input (again, a buffer).
     
  8. jna99

    jna99 Registered Member

    Sure, no problemo. Still very interesting to see how a big corporation like Microsoft 'deals' with bugs, and whether some bugs get fixed at all, depending on various factors like the geography of the people involved (same building or not) or the trust level between the various people, etcetera.
    Maybe not so relevant for you personally, but it gives you a picture of a huge corporation handling bug fixing, not so much for legacy software as for current software actively used by lots of people.

    At the end of the article there are several references to books and articles.
    If you search Google for "bug fix time prediction model" you'll get some interesting studies and articles on the subject.
    Anyway, good luck with your approach to bug fixing, updating legacy software, or beefing up security on legacy software. :)
     
  9. BrandiCandi

    BrandiCandi Guest

    Um... no. There is no security through obscurity. There is FALSE security through obscurity if that's what you're going for.

    The point is this: if you want to do your own update management, then you need to understand each vulnerability and determine the risk associated with it.
    - If there's a canned exploit for a vulnerability in Metasploit (for instance), then it would be given a higher risk rating.
    - If a vulnerability would require an attacker to develop his own exploit, then the risk can be rated lower, as it would probably only come up in a targeted attack (and I'm willing to bet that no one in this thread is going to be targeted like that). That's not security through obscurity; it's understanding the likelihood of an attack through any given vector.
    - If it's a zero day, then no patch exists, so you can disregard it in your assessment. The risk will still be there but there's nothing you can do about it, so why worry about it?
     
  10. I was thinking security vs. malware and automated attacks. But I get your point; if malware fails to install most of the time because my vulnerabilities simply aren't the ones that are in vogue, I'm not secure.

    I see. This really boils down to knowing one's system inside and out (which means I have quite a ways to go :eek: ).

    Hmm. Because depending on your situation there might be something you can do to mitigate it, perhaps? On a desktop anyway... Not sure about a server.
     
  11. Hungry Man

    Hungry Man Registered Member

    Consider this. The cost of exploiting a known vulnerability that's already been patched is usually just a matter of reverse engineering the patch (if there isn't already exploit code available) - essentially there's very little cost attributed to this.

    The cost of finding a new vulnerability in the patched code is going to be higher. Even if a vulnerability is found the attacker has to figure out how to make it exploitable all on their own or wait for someone else to do it.

    So even if you do patch one vulnerability and add another the cost of exploitation is higher.

    And I don't think that scenario (a patch introducing a new vulnerability) is very likely anyway.
     
  12. BrandiCandi

    BrandiCandi Guest

    @Gullible Jones:

    Yeah, I've got a really long way to go myself.

    And yes, I would agree that there are definitely other ways to mitigate zero days: on servers, desktops, everywhere. You can use some kind of application confinement (AppArmor, SELinux, mandatory access controls on Windows), firewalls, attack-surface reduction, etc. If you can understand the vulnerability, then you can find a way to mitigate whatever an exploit would take advantage of. (Actually, I assume you would already be doing those things, especially on a server.) My point was simply that if there's no patch available, then there's no point in considering how updating would affect it. Because it won't.
     
    Last edited by a moderator: Aug 13, 2012
  13. BrandiCandi

    BrandiCandi Guest

    I would agree. Install patches as they come out, because once a vuln has been patched, exploiting it on unpatched systems gets easier.

    I guess the real point is that an unknown vuln is unknown until someone discovers it. So patch management/updating is a constant thing. You're never done.
     