Re: Russ Cooper's AusCERT Presentation on MS Security Bulletins
From: Steven M. Christey (coley_at_MITRE.ORG)
Date: Fri, 4 Jun 2004 17:46:09 -0400
To: NTBUGTRAQ@LISTSERV.NTBUGTRAQ.COM
Just a couple comments on the quantification of vulnerabilities. I
could write volumes on the topic :) but I'll try to keep it brief.
On counting vulnerabilities
1) I agree that counting advisories has limited value, especially when
comparing across vendors. Three main reasons are:
(a) Each vendor has their own criteria for when to publish
(b) Advisories can have multiple issues
(c) The same vendor can release multiple advisories for the same
issue (the Linux vendors seem to be doing this much more often
these days).
See [1] for more commentary.
2) Everybody "counts" vulnerabilities differently. In addition, each
disclosed issue comes with a varying level of detail, which can
affect whether it is counted as one vulnerability or several.
See [3] for more commentary.
3) A quantitative comparison of vulnerabilities should use a
normalized data set, both so that the comparison is reasonably
repeatable and so that the underlying bias of the normalization
can be understood.
4) One of my goals for CVE has been to allow it to be used for
normalizing vulnerability data. CVE's content decisions [2]
attempt to provide a repeatable means for "counting"
vulnerabilities, across the space of *all* public vulnerabilities
and *all* vendors and *all* levels of detail. At first (and second
and third) glance, these content decisions may be somewhat
counter-intuitive. However, experience over the years has shown
that they can be repeatably applied by other parties, and I
estimate that only 3% of publicized issues still require some
case-by-case judgment.
Based on CVE's statistics, approximately 15% of all reported
vulnerabilities are "counted" in different ways by different
sources.
Internal consistency is an important goal for many of the
vulnerability database owners that I know. Therefore, most
well-known, publicly accessible data sets could be used to
normalize the data, not just CVE.
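
To make point 3 concrete, here is a minimal sketch in C, using
made-up CVE names, of what normalizing on a shared identifier looks
like: each advisory contributes one entry per CVE name it covers,
and the entries are then deduplicated, so two advisories for the
same issue count once.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Compare two string pointers for qsort(). */
    static int cmp(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    int main(void)
    {
        /* CVE names extracted from one vendor's advisories (made-up);
         * the duplicates model two advisories covering the same issue. */
        const char *ids[] = {
            "CVE-2004-0001", "CVE-2004-0002", "CVE-2004-0002",
            "CVE-2004-0003", "CVE-2004-0001",
        };
        size_t n = sizeof ids / sizeof ids[0];
        size_t unique = 0;

        qsort((void *)ids, n, sizeof ids[0], cmp);
        for (size_t i = 0; i < n; i++)
            if (i == 0 || strcmp(ids[i], ids[i - 1]) != 0)
                unique++;

        printf("%zu advisory entries -> %zu normalized issues\n",
               n, unique);
        return 0;
    }

The same deduplication step works against any internally consistent
identifier scheme, which is why the consistency of the data set
matters more than which data set you pick.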
Why the number of issues isn't shrinking rapidly
Here's part of my theory.
1) There are more new variants of old issues. Few people seem to
understand that these days, there are several variants of "buffer
overflows" that were not known to exist a few years ago. Thus the
space of possible vulnerabilities has increased, and we now have
the term "classic overflow" to distinguish the old standby from the
new variants, most of which are only found by top researchers.
An extremely brief outline of vulnerability types, which includes
new variants of old issues (e.g. 20+ variants of directory
traversal), is in [4].
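
As an illustration of the "classic overflow" that the newer variants
are measured against, here is a minimal hypothetical C sketch: a
fixed-size stack buffer filled from attacker-controlled input with
no length check. The newer variants are far less obvious than this.

    #include <string.h>

    /* Hypothetical request handler with a classic stack overflow. */
    static void handle_request(const char *input)
    {
        char name[64];

        strcpy(name, input);  /* no bounds check: input longer than 63
                                 bytes overwrites adjacent stack memory */
        /* ... use name ... */
    }

    int main(void)
    {
        handle_request("short and harmless");
        /* an attacker-supplied string of 100 "A"s would smash the stack */
        return 0;
    }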
2) New vulnerability classes, or attack strategies, are discovered on
a fairly regular basis. For example, the security implications of
integer overflows and signedness errors (documented in most modern
secure programming books including Howard/LeBlanc) have only become
understood in the past couple of years. The same goes for
off-by-one errors.
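
A minimal hypothetical sketch of the signedness and off-by-one
errors just mentioned, patterned on the examples in the
secure-programming books cited above:

    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 256

    /* Signedness error: a negative length passes the signed check,
     * then converts to a huge unsigned value inside memcpy(). */
    static int copy_record(char *dst, const char *src, int len)
    {
        if (len > BUFSIZE)              /* len == -1 slips through */
            return -1;
        memcpy(dst, src, (size_t)len);  /* (size_t)-1 is an enormous copy */
        return 0;
    }

    /* Off-by-one: len == BUFSIZE passes the check, and the terminator
     * is then written one byte past the end of the buffer. */
    static int copy_string(char *dst, const char *src, size_t len)
    {
        if (len > BUFSIZE)
            return -1;
        memcpy(dst, src, len);
        dst[len] = '\0';   /* writes dst[BUFSIZE] when len == BUFSIZE */
        return 0;
    }

    int main(void)
    {
        char buf[BUFSIZE];

        copy_record(buf, "hello", 5);   /* benign call; hostile callers lie */
        copy_string(buf, "hello", 5);
        puts(buf);
        return 0;
    }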
3) New classes of software are receiving greater attention than
before. The emphasis used to be on servers, but now there is
increasing focus on clients (which, shock of all shocks, have many
of the same kinds of bugs that used to plague servers) and other
types of software.
4) The top researchers continue to improve, and the vulnerability
research community, as a whole, has grown by leaps and bounds in
recent years. There are more researchers, using better tools and
techniques, investigating a wider variety of vulnerabilities.
Suite-based testing, a la University of Oulu's PROTOS, can find
dozens or hundreds of vulnerabilities across many products.
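
A minimal sketch of the suite-based approach (hypothetical code;
parse_packet() is a stand-in for the parser under test): take a
valid template input, mutate every byte position through a few
hostile values, and run each case through the target, watching for
crashes or hangs.

    #include <stdio.h>
    #include <string.h>

    /* Stub standing in for the real parser under test. */
    static int parse_packet(const unsigned char *buf, size_t len)
    {
        return (len > 0 && buf[0] == 'V') ? 0 : -1;
    }

    int main(void)
    {
        unsigned char base[] = "VALID-PACKET-HEADER";
        unsigned char test[sizeof base];
        const unsigned char bad[] = { 0x00, 0x7F, 0x80, 0xFF };

        /* Exhaustively malform each byte; a crash or hang in the
         * parser, rather than a clean rejection, flags a bug. */
        for (size_t pos = 0; pos < sizeof base; pos++) {
            for (size_t v = 0; v < sizeof bad; v++) {
                memcpy(test, base, sizeof base);
                test[pos] = bad[v];
                if (parse_packet(test, sizeof test) != 0)
                    printf("rejected: pos=%zu val=0x%02X\n",
                           pos, bad[v]);
            }
        }
        return 0;
    }

Real suites like PROTOS derive their cases from the protocol grammar
rather than byte flips, but the shape is the same: one template,
thousands of systematically derived malformed inputs.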
Judging product security using alternate measurements
I'd like to toss out a crazy suggestion: one reasonable
*qualitative* measurement of product security might involve:
- (a) the complexity of the bugs that are found in the product
- (b) the expertise of the researchers who are finding the bugs
- (c) the amount that the software has been analyzed in the past
However, you still can't be sure if the product's vulnerability
history reflects its overall security, because (1) there aren't any
standards to assure minimum, repeatable, consistent results that are
independent of the researcher [though OWASP is moving in this
direction for web apps], (2) researchers only publish when they find
something, and (3) the current terminology of vulnerabilities is
insufficient to capture bug complexity (e.g. the increasingly generic
"buffer overflow" term).
Many of the most serious vulnerabilities are found only by a
handful of highly skilled individuals or organizations, most of
whom have a particular research specialty (whether an OS, a bug
type, or a type of software).
If you have a piece of software in which an obvious classic buffer
overflow and a ".." directory traversal are discovered, then maybe it
shouldn't compare favorably to another piece of software whose only
recently published hole is by a top researcher who had to invent a new
vulnerability class to exploit it.
 "Re: Funny article"
Steve Christey, Bugtraq post, November 13, 2003
 "CONTENT DECISIONS" chapter in "A Progress Report on the CVE
Initiative," by Robert Martin, Steven Christey, and David Baker.
 "RE: The new Microsoft math: 1 patch for 14 vulnerabilities, MS04-011"
Steve Christey, Full-Disclosure post, April 15, 2004
 "Re: Vulnerability Auditing Checklist"
Steve Christey, secprog post, May 4, 2004