Re: Public disclosure of discovered vulnerabilities

From: David Brown
Date: Mon, 23 May 2005 21:17:53 +0200

Bryan Olson wrote:
> David Brown wrote:
> > [...] Regarding the necessity of
> > completing a proof rather than just being certain of the existence of a
> > proof, I think it depends on the problem at hand. I've occasionally
> > done formal proofs of code or algorithms, but more often I've worked out
> > semi-formal proofs, or just thought through what would be required for a
> > proof, or made notes on the proverbial back of an envelope. In many
> > cases, I think that gives me the same certainty as a completed formal
> > proof would.
> Why would you think that when those methods have a record of
> failure? The only non-trivial and reliable proofs of correctness
> are those that are automatically verified. As with any kind of
> program, verification programs can have bugs, but programs don't
> get distracted and overlook things.
> For my own buffer-overrun bug, I could have written an alleged
> proof of correctness, making the same mistake as in the code. (I
> suspect a mathematician would be more likely to accept the proof
> than would a C language lawyer.) That's a common form of defect
> in correctness proofs: the proof overlooks the same issue as the
> program.

It's exactly that sort of risk of errors in the "proof" that led me to
say that completing the proof would give no more certainty of
correctness. Using a verification program would avoid human errors in
the proof, provided the code in question is amenable to proof by the
verification tool.
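To give an idea of what a machine-checked contract looks like in practice - Frama-C's ACSL annotation language is one such notation - here is a small sketch. The function, its contract, and its name are my own illustration, not taken from either post; the annotation is an ordinary comment to a C compiler, so the file still builds as plain C even without the verifier.

```c
#include <stddef.h>

/* A bounded string copy whose contract a verifier such as Frama-C
 * can check mechanically against the code, rather than trusting a
 * hand-written argument that might overlook the same corner case
 * as the implementation. */

/*@ requires n > 0;
    requires \valid(dst + (0 .. n-1));
    requires \valid_read(src + (0 .. n-1));
    ensures \exists integer k; 0 <= k < n && dst[k] == '\0';
*/
void bounded_copy(char *dst, const char *src, size_t n)
{
    size_t i;
    /* The loop bound i + 1 < n keeps i <= n-2 inside the body and
     * i <= n-1 afterwards, so the final write below is in bounds. */
    for (i = 0; i + 1 < n && src[i] != '\0'; i++)
        dst[i] = src[i];
    dst[i] = '\0';
}
```

The point is Bryan's: the off-by-one that a human reviewer (or a human prover) glosses over is exactly what the mechanical check of the `\valid` ranges will not let through.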

> [...]
> > Checking your inputs is of course vital. However, I think buffer
> > overflows are less likely to be an issue in embedded systems as the
> > inputs are frequently within much tighter specified possible ranges than
> > on large systems getting user input, and far more likely to be of
> > predetermined known size. Communications protocols are a possible
> > exception (and my guess is that your code comes from communications
> > routines), especially if you have to deal with data (such as telegram
> > length) before it is verified.
> Telegrams? Get with the times. Many of us own half a dozen or so
> small embedded systems that decode various media files. The data
> is encoded in layers; the input might be a valid encoding of a
> valid encoding of a cleverly-designed invalid encoding.

I'm sure many of our customers would love our systems to play music - if
they could hear anything at all above the noise of machinery, or if they
were within a hundred miles of the monitoring equipment, or whatever
system they happened to be using. Embedded systems with controlled and
limited inputs greatly outnumber those which have to deal with a risk of
intentionally invalid user data. Whatever the system, obviously, you
have to do appropriate checks.
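To make the earlier point about telegram lengths concrete: the dangerous pattern is using a length field taken from the wire before it has been checked against both the receive buffer and the amount of data actually received. A minimal sketch of the check - the frame layout, names, and limit are illustrative, not from any real protocol:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 64u  /* illustrative fixed receive-buffer size */

/* Frame layout (hypothetical): [len: 1 byte][payload: len bytes].
 * Returns the payload length, or -1 if the frame is malformed or
 * the claimed length cannot be trusted. */
static int read_telegram(const uint8_t *frame, size_t frame_size,
                         uint8_t payload[MAX_PAYLOAD])
{
    if (frame_size < 1)
        return -1;                   /* no length byte at all */

    size_t len = frame[0];
    if (len > MAX_PAYLOAD)
        return -1;                   /* claimed length exceeds our buffer */
    if (len > frame_size - 1)
        return -1;                   /* claimed length exceeds what arrived */

    memcpy(payload, frame + 1, len); /* safe: len <= MAX_PAYLOAD */
    return (int)len;
}
```

Both rejection branches matter: the first stops a buffer overrun on our side, the second stops an over-read past the end of the received data.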