Re: [fw-wiz] VPN endpoints

From: Marcus J. Ranum (mjr_at_ranum.com)
Date: 08/31/04


To: Devdas Bhagat <devdas@dvb.homelinux.org>, firewall-wizards@honor.icsalabs.com
Date: Mon, 30 Aug 2004 21:01:06 -0400

Devdas Bhagat wrote:
>In the information security case, this is generally numbers pulled out
>of thin air.

It's worse than that, actually. In a lot of cases the numbers _appear_ to
have been collected by "scientific" means - the problem is that they
actually weren't. For example, many practitioners base their pitches
to management on surveys (the CSI/FBI survey or the CIO magazine
survey on security, for instance) - and a lot of those surveys are
fundamentally flawed. They yield results, but it's hard to say what the
results actually _measured_.

Specifically, many security surveys are based on self-selected
samples (e.g., "polls"). When you do a poll, what you're doing is
asking "Please fill this out." But there are a lot of assumptions
that get dropped on the floor. :( What you're really measuring is:
        - How much the person cared about the topic (motive to respond)
        - How honest the respondent is (hard to verify)
        - Other factors (hard to predict)
I'm sure nobody on this list has ever filled out one of those surveys
from a magazine that asked your job title, whether you were a
decision-maker, your company size, etc... And I'm sure you all
fill them out EXACTLY right. I used to enjoy periodically asserting
that I was the CEO of a 1-person company with a $4,000,000 IT
budget (well, a guy can dream, huh?). Unfortunately, sometimes
that stuff gets used for serious purposes instead of just marketing.
But a more typical example of a self-selected sample would be the
CIO survey - they polled CIOs about security expenditures and
concluded that 60% felt their security $$ were in the right ballpark
(or something like that). What they could legitimately have
concluded was:
        "60% of the people who cared enough to fill out our survey
        and who claimed they were CIOs felt..."
That's not as strong a statement of your "result", but that's science
for you. ;) Every time I try to explain this issue to one of these
self-anointed pollsters the response is usually "but the information
is BETTER THAN NOTHING." Unfortunately, that's not true, because:
        a) it's a disservice to science
        b) it might be so inaccurate that it's a disservice to publish it
        c) regardless of the sample size, the inaccuracy may stay
                constant if the bias is also constant (see the sketch below)
        d) the only way to measure the bias is to do the survey RIGHT
                and compare the results
Because of point d) above, it's just not worth doing it wrong - it's
only cheaper.
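
To make point c) concrete, here's a quick back-of-the-envelope Python
sketch. Everything in it is made up for illustration - the population,
the 30% "true" rate, and the response propensities - but it shows the
mechanism: a random sample converges on the true rate as it grows,
while a self-selected sample just converges on the wrong number.

import random

random.seed(1)

# Hypothetical population: 30% genuinely think their security
# budget is "about right" (the "true" rate is 0.30).
POP = 1_000_000
population = [random.random() < 0.30 for _ in range(POP)]

def random_sample(n):
    # Draw n people uniformly at random and ask all of them.
    picks = random.sample(population, n)
    return sum(picks) / n

def self_selected_sample(n):
    # People decide for themselves whether to answer: those who
    # think the budget is right respond 60% of the time, everyone
    # else only 10% of the time. Collect n responses.
    responses = []
    while len(responses) < n:
        person = random.choice(population)
        propensity = 0.6 if person else 0.1
        if random.random() < propensity:
            responses.append(person)
    return sum(responses) / n

for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:>6}  random={random_sample(n):.3f}  "
          f"self-selected={self_selected_sample(n):.3f}")

With those made-up propensities the self-selected estimate settles
around 0.72 instead of 0.30 no matter how many responses you collect -
a bigger self-selected sample buys you precision about the wrong
answer, not accuracy.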

My favorite example of how a self-selected sample can screw you
up is a website that had a marketing poll about "would you buy this
service if we offered it?" The poll had an overwhelmingly positive
response and a good sample size. So they went ahead with the
project and it was a complete flop. They went back and started trying
to figure out where the poll went wrong and determined that the sample
(being self-selected) actually measured the number of people who
were interested in taking the poll, not the topic being polled. They
discovered that 85% of the respondents were unemployed housewives;
that's a significant biasing factor. Many of the polls regarding security
probably measure a bimodal distribution between:
        - the execs who give a sh&t enough to fill in some dumb poll
        - mid-level engineers who fill in what they think their bosses
                should think

Anyhow, I know Devdas' posting did not officially invite this rant,
but it's a topic that really drives me nuts and I thought I'd take
advantage of however tenuous a lead-in he gave me. :) Thanks
Devdas! :)

mjr.

_______________________________________________
firewall-wizards mailing list
firewall-wizards@honor.icsalabs.com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards