Re: x86-64 buffer overflow exploits and the borrowed code chunks exploitation technique

From: Carlos Moreno

Date: Fri, 07 Oct 2005 17:16:00 -0400

Douglas A. Gwyn wrote:
> Carlos Moreno wrote:
>>... I get a more
>>peaceful sleep knowing that if I make the mistake, I get a
>>segfault and core dump, instead of overwriting some internal
>>kernel buffers.
> But if it's a life-support system the accidental buffer
> overrun might be more benign than a total system shutdown.
> It is certainly better to have neither, and that requires
> more care in the design and construction that is commonly
> seen. How do we encourage improvement in that area? Not
> by promoting bogus solutions that don't even try to
> address the sources of the problems.

It depends on how you look at it.

The thing is, I don't operate machinery that could be
potentially harmful and even fatal, without the "safety
switches" on -- sure, ideally, I should *not* make any
mistake while using such machinery; but if I do make
one, I'd rather the cost be low (e.g., 200 hours of
work going to the wastebasket, or $1000 worth of
equipment being destroyed), rather than high (living
without half my right arm for the rest of my time,
or even a trip to the morgue that same day).

Ideally, drivers should be competent and responsible,
and never drink and drive, and never drive at excessive
speeds, etc. But I do prefer driving *always* with
the seatbelt on -- *if* I make one mistake, or the
driver next to me makes one bad mistake, I'd rather
have the seatbelt save my life than become one more
number in the statistics of "How Darwin was right".

Talking about software security, there are many fronts
to cover. One of them is "fail gracefully", which can
be rephrased as "reduce the cost of failures".
I do *not* want the system to fail, and I *want* to
make the biggest and most diligent effort to avoid
the possibility; but I also have to know that,
regardless of the amount of effort I put in, the
probability of failure *is always greater than zero*.

If I have something that unconditionally protects
me from the damage caused by a failure, then by all
means I'll take it -- I won't turn it down by arguing
that the real solution is not to have failures. I'll
still try not to have failures even after reducing
their security cost, because there are other reasons
to avoid failures (as you pointed out with your
example). If the *only* consequence of buffer
overflows were the security hole, then I'd completely
forget about them when writing software, and rely on
hardware protection that prevents buffer overflows
from being exploited (*if* that is possible -- I guess
at the present time, it's not?)