Re: Russ Cooper's AusCERT Presentation on MS Security Bulletins

From: Steven M. Christey (coley_at_MITRE.ORG)
Date: 06/04/04

    Date:         Fri, 4 Jun 2004 17:46:09 -0400
    To: NTBUGTRAQ@LISTSERV.NTBUGTRAQ.COM
    
    

    Russ,

    Just a couple comments on the quantification of vulnerabilities. I
    could write volumes on the topic :) but I'll try to keep it brief.

    On counting vulnerabilities
    ---------------------------

    1) I agree that counting advisories has limited value, especially when
       comparing across vendors. Three main reasons are:

       (a) Each vendor has their own criteria for when to publish
           advisories

       (b) Advisories can have multiple issues

       (c) The same vendor can release multiple advisories for the same
           issue (the Linux vendors seem to be doing this much more
           frequently nowadays).

       See [1] for more commentary.

    2) Everybody "counts" vulnerabilities differently. In addition, each
       disclosed issue comes with a varying level of detail, which can
       affect whether one says there is a single vulnerability or
       several.

       See [3] for more commentary.

    3) A quantitative comparison of vulnerabilities should use a
       normalized data set, both so that the comparison is reasonably
       repeatable and so that the bias introduced by the normalization
       can be understood.

    4) One of my goals for CVE has been to allow it to be used for
       normalizing vulnerability data. CVE's content decisions [2]
       attempt to provide a repeatable means for "counting"
       vulnerabilities, across the space of *all* public vulnerabilities
       and *all* vendors and *all* levels of detail. At first (and second
       and third) glance, these content decisions may be somewhat
       counter-intuitive. However, experience over the years has shown
       that they can be repeatably applied by other parties, and I
       estimate that only 3% of publicized issues still require some
       interpretation.

       Based on CVE's statistics, approximately 15% of all reported
       vulnerabilities are "counted" in different ways by different
       sources.

       Internal consistency is an important goal for many of the
       vulnerability database owners that I know. Therefore, most
       well-known, publicly accessible data sets could be used to
       normalize the data, not just CVE. (A small sketch of this kind
       of normalization follows this list.)
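
    To make points 1 and 4 concrete, here is a minimal sketch in Python
    (with entirely invented advisory and CVE identifiers) of what
    normalizing on CVE names buys you. Counting advisories and counting
    the distinct issues behind them give different totals:

        # Hypothetical data: each vendor advisory lists the CVE names of
        # the issues it covers. All identifiers below are invented.
        advisories = {
            "VENDOR-A-2004-001": ["CAN-2004-0001", "CAN-2004-0002"],
            "VENDOR-A-2004-002": ["CAN-2004-0003"],
            "VENDOR-B-2004-017": ["CAN-2004-0001"],  # same issue, 2nd vendor
            "VENDOR-B-2004-018": ["CAN-2004-0001"],  # advisory re-released
        }

        # Naive metric: one advisory == one vulnerability.
        print("advisories:", len(advisories))            # 4

        # Normalized metric: count distinct CVE names instead.
        issues = {cve for cves in advisories.values() for cve in cves}
        print("distinct issues:", len(issues))           # 3

    The totals diverge precisely because of reasons (b) and (c) under
    point 1; normalizing on CVE names keeps the count repeatable no
    matter how a vendor chooses to package its advisories.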

    Why the number of issues isn't shrinking rapidly
    ------------------------------------------------

    Here's part of my theory.

    1) There are more new variants of old issues. Few people seem to
       realize that there are now several variants of "buffer overflow"
       that were not known to exist a few years ago. Thus the space of
       possible vulnerabilities has grown, and we now have the term
       "classic overflow" to distinguish the old standby from the newer
       variants, most of which are found only by top researchers.

       An extremely brief outline of vulnerability types, which includes
       new variants of old issues (e.g. 20+ variants of directory
       traversal), is in [4]. (A small sketch of one such variant
       follows this list.)

    2) New vulnerability classes, or attack strategies, are discovered on
       a fairly regular basis. For example, the security implications of
       integer overflows and signedness errors (documented in most modern
       secure programming books, including Howard/LeBlanc) have only
       become widely understood in the past couple of years. The same is
       true of off-by-one errors. (A sketch of a 32-bit integer overflow
       also follows this list.)

    3) New classes of software are receiving greater attention than
       before. The emphasis used to be on servers, but now there is
       increasing focus on clients (which, shock of all shocks, have many
       of the same kinds of bugs that used to plague servers) and other
       types of software.

    4) The top researchers continue to improve, and the vulnerability
       research community, as a whole, has grown by leaps and bounds in
       recent years. There are more researchers, using better tools and
       techniques, investigating a wider variety of vulnerabilities.
       Suite-based testing, a la University of Oulu's PROTOS, can find
       dozens or hundreds of vulnerabilities in many products, which is an
       unprecedented scale.
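
    To illustrate the "new variants of old issues" point, here is a
    minimal Python sketch (the filter and paths are invented for
    illustration) of why a check that catches the classic ".."
    traversal misses encoded variants:

        from urllib.parse import unquote

        def naive_filter(path):
            # True means "looks safe": no literal ".." in the raw path.
            return ".." not in path

        print(naive_filter("../../etc/passwd"))   # False: classic form caught

        # Encoded variants pass the check, yet still walk up the
        # directory tree on a server that decodes *after* filtering:
        for p in ["%2e%2e/etc/passwd",        # URL-encoded dots
                  "%2e%2e%2fetc%2fpasswd",    # fully encoded
                  "%2e%2e%5cboot.ini"]:       # encoded backslash
            print(p, naive_filter(p), "->", unquote(p))

    Double encoding, alternate separators, and similar tricks multiply
    the variants well beyond the classic form; see [4] for the broader
    outline.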
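
    Similarly, for the integer overflow point, here is a small sketch
    (Python, simulating 32-bit unsigned C arithmetic with a mask; the
    scenario is invented) of why each individual step can look
    harmless:

        MASK32 = 0xFFFFFFFF   # simulate 32-bit unsigned C arithmetic

        def alloc_size(count, elem_size):
            # Compute count * elem_size the way a 32-bit C program would.
            return (count * elem_size) & MASK32

        count = 0x40000001            # attacker-supplied element count
        size = alloc_size(count, 4)   # count * 4 wraps around 2**32

        print(hex(size))   # 0x4: the program allocates 4 bytes, then
                           # copies 'count' elements into them

    Signedness errors follow the same pattern: a length read into a
    signed int passes a "len < sizeof(buf)" check while negative, then
    becomes a huge unsigned value when it reaches memcpy().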

    Judging product security using alternate measurements
    -----------------------------------------------------

    I'd like to toss out a crazy suggestion: one reasonable
    *qualitative* measurement of product security might involve

      - (a) the complexity of the bugs that are found in that product

      - (b) the expertise of the researchers who are finding the bugs

      - (c) the extent to which the software has been analyzed in the
            past

    However, you still can't be sure whether a product's vulnerability
    history reflects its overall security, because (1) there aren't any
    standards to assure minimum, repeatable, consistent results that are
    independent of the researcher [though OWASP is moving in this
    direction for web apps], (2) researchers publish only when they find
    something, and (3) the current terminology of vulnerabilities is
    insufficient to capture bug complexity (e.g. the increasingly
    generic "buffer overflow" term).

    Many of the most serious vulnerabilities are found only by a handful
    of highly skilled individuals or organizations, most of whom have a
    particular research specialty (whether OS, bug type, or software
    type).

    If you have a piece of software in which an obvious classic buffer
    overflow and a ".." directory traversal are discovered, then maybe
    it shouldn't compare favorably to another piece of software whose
    only recently published hole was found by a top researcher who had
    to invent a new vulnerability class to exploit it.

    References
    ----------

    [1] "Re: Funny article"
        Steve Christey, Bugtraq post, November 13, 2003
        http://marc.theaimsgroup.com/?l=bugtraq&m=106876505903572&w=2

    [2] "CONTENT DECISIONS" chapter in "A Progress Report on the CVE
        Initiative," by Robert Martin, Steven Christey, and David Baker.
        http://cve.mitre.org/docs/docs2002/prog-rpt_06-02/

    [3] "RE: The new Microsoft math: 1 patch for 14 vulnerabilities, MS04-011"
        Steve Christey, Full-Disclosure post, April 15, 2004
        http://lists.seifried.org/pipermail/security/2004-April/002964.html

    [4] "Re: Vulnerability Auditing Checklist"
        Steve Christey, secprog post, May 4, 2004
        http://marc.theaimsgroup.com/?l=secprog&m=108368513423653&w=2
