Testing Manifestos [WAS: OSEC, ICSA, Cats and Dogs, etc.]

From: Greg Shipley (gshipley@neohapsis.com)
Date: 01/10/03

    Date: Fri, 10 Jan 2003 00:39:55 -0600 (CST)
    From: Greg Shipley <gshipley@neohapsis.com>
    To: focus-ids@securityfocus.com
    
    

    [Warning: this one is long, folks, my apologies in advance....]

    I've given a fair amount of thought in my "free time" to the recent OSEC /
    Intrusion-Prevention thread (and in particular, Marcus's post) and I
    wanted to pass on a few ramblings. I'm going to try to focus more on the
    philosophy behind testing here, and less on the names. I realize that
    much of my earlier post on testing efforts, specific certs, etc., could
    have come across as very "mud-slingy-ish," which honestly was not my
    intention (although I'm certainly guilty of sending some zingers once in a
    while). In addition, I think that mentioning vendor/company names
    really took the heat up a notch and detracted from the true points of the
    discussion.

    Ironically, *I* was triggered by this very scenario in the suggestion that
    OSEC was similar to what ICSA was doing, and while I still stand by my
    statement that OSEC is much different, my knee-jerk reaction is a topic for
    social analysis, possibly in another forum, at another time.... :)

    Second, Marcus, I appreciate hearing many of your views from the vendor's
    perspective. I've never worked for a product company, so while I can
    empathize with many of your positions, I probably can't relate to them at
    the same level as some of the other "vendor folks" on this list. Your
    opinions on such matters are obviously insightful, and I appreciate you
    sharing them. Thank you.

    However, with that said, I invite you to consider my position as a public
    reviewer. I will humbly suggest that my world is a bit different than
    yours, with different drivers and different goals, and will attempt in
    this e-mail to explain why. Here goes...

    --------------

    First, I think it is important that consumers recognize that there are
    MULTIPLE forums in which products are reviewed, each with its own set of
    dynamics. The big points I think people should make note of are:

    A. Who funded the test?
    B. Who defined the criteria?
    C. Does the testing methodology make sense?
    D. Were the tests executed properly?
    E. Do the testers have a good reputation / have they *successfully* done
       this type of testing before?

    Taking a subset of answers to those questions, here are some of the more
    common combinations our industry sees:

    1. Vendor performed benchmarking based on vendor-defined criteria: vendor
    does the tests, vendor publishes the results.

    2. Vendor-sponsored, 3rd-party performed bake-offs based on vendor-defined
    criteria: vendor funds the tests, 3rd-party performs the tests, vendor or
    3rd party publishes the results.

    3. Reviews (bake-offs) in trade publications w/ light testing: funded by
    the trade publication, internal staff writes an article based on little or
    no real testing, trade publication publishes the results.

    4. Vendor-sponsored, 3rd-party performed bake-offs based on 3rd-party
    defined criteria: vendor funds the tests, 3rd-party performs the tests,
    3rd party publishes the results.

    5. Reviews (bake-offs) in trade publications w/ heavy testing: funded by
    the trade publication, 3rd-party (or internal staff) perform the tests,
    trade publication publishes the results.

    ....the list goes on. My personal opinion is that anything that is a)
    totally vendor sponsored, and b) has criteria that are defined solely by
    one vendor, is instantly suspect. In fact, I often wonder why ANYONE
    gives ANY weight to a test of a single product by a supposed "objective
    3rd-party" when it is vendor sponsored, and vendor defined. I mean, come
    on - honestly, like we're going to see the report if the tests DON'T come
    out positive for that vendor? Right....
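
    (For the code-minded, here's a quick, throwaway sketch of that rule in
    Python - purely illustrative, the names are made up, and it's obviously
    not part of any real test criteria:)

        # Hypothetical sketch: the five questions above as fields, plus the
        # "instantly suspect" rule from the paragraph above.
        from dataclasses import dataclass

        @dataclass
        class ProductTest:
            funder: str                # A. who funded the test
            criteria_author: str       # B. who defined the criteria
            methodology_sound: bool    # C. does the methodology make sense
            executed_properly: bool    # D. were the tests executed properly
            proven_testers: bool       # E. testers' track record

        def instantly_suspect(test: ProductTest, vendor: str) -> bool:
            # Funded by the vendor AND criteria defined solely by that vendor.
            return test.funder == vendor and test.criteria_author == vendor

        print(instantly_suspect(
            ProductTest("VendorX", "VendorX", True, True, False), "VendorX"))
        # -> True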

    I'd like to believe the consumer base is a little smarter than this, but I
    digress...

    ---------------------

    Second, I found the following statements interesting:

    On Mon, 30 Dec 2002, Marcus J. Ranum wrote:

    > With respect to industry certification efforts - that's a trickier
    > matter. The objective is to set a bar and continually raise it. It
    > flat-out doesn't work if you start with the bar too high.

    I would humbly suggest that this is entirely based on the motives for the
    "certification" effort. For example, if I'm launching a certification
    effort and I want EVERYONE to pass (or have a hope of passing), I will
    inevitably be reduced to lowest-common-denominator criteria definition.
    There simply isn't a way around this. In particular, if the participating
    vendors are driving the criteria, the least-functional, lowest-performing
    vendor will drag the whole thing down.

    So the "certifier" has a choice: exclude that bottom tier, or risk not
    achieving consensus. It's a tough spot. I'VE BEEN THERE. And in the
    case where you want everyone to participate, you are absolutely right: you
    set the bar low, and you continually inch it up. I will also suggest that
    this is one way of trying to nudge a particular product space forward,
    albeit very slowly, and with little effect over long periods of time.

    HOWEVER, I will *also* suggest that there is another approach: set the bar
    high, with the understanding that not everyone is going to achieve it.
    Those that do have bragging rights in the areas they pass. Those that
    don't, well, don't.

    But the bottom line is that you don't HAVE to set the bar low. I will
    agree that most efforts have gone that way, however. I will let this list
    take it from here...

    > What I gather you're trying to do with OSEC is test stuff and find it
    > lacking or not. Basically you want to say what products you think are
    > good or bad - based on your idea (with input from customers and vendors)
    > of good and bad.

    Er, close, but no, and perhaps I am at fault for the confusion.

    What OSEC does is VERIFY certain aspects of a product. There are no GOOD
    and BAD ratings. The *test* is not pass/fail; the *criteria points* are
    pass/fail. For example, in OSEC NIDS v1.0, test E7 uses "HTTP (500 Mbps,
    536 MSS)" as background traffic across 10,000 addresses (which does some
    table flexing), at approximately 116k pps. HOWEVER, if a vendor doesn't
    market their product to run at 500+ Mbps (and under similar traffic
    profiles), this test is irrelevant.
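
    (Quick sanity check on those numbers, as a sketch - assuming the 500 Mbps
    figure is counted at the TCP-payload level, the ~116k pps falls right out
    of the arithmetic:)

        # Back-of-the-envelope check of the E7 traffic figures above.
        rate_bps = 500_000_000   # 500 Mbps of HTTP background traffic
        mss_bytes = 536          # TCP maximum segment size used in the test

        pps = rate_bps / 8 / mss_bytes   # bits -> bytes -> segments per second
        print(f"{pps:,.0f} packets/sec") # ~116,604, i.e. roughly 116k pps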

    Further, if a consumer doesn't need something to inspect traffic at those
    speeds, the test is irrelevant TO THEM. So while I will assume that these
    criteria points are relevant to SOME PEOPLE, the criteria are not based on
    an "all of these points are good" or "if you don't pass this, you suck"
    mentality. It's far more intelligent than that.

    And *unlike*, say, a Network Computing article, there are no "soft
    analysis" angles to these tests - they are simply sensor tests, and they
    only measure/verify SOME of the many components of NIDS solutions.

    > Of course, if I were a vendor, I'd skewer you as publicly and often as
    > possible for any bias I could assign you. Because your approach is
    > inherently confrontational.

    "Confrontational" - that's an interesting word to choose. Does the
    approach validate on a yes/no level? Yup - I would argue that's what
    performing good testing is about. So if that's what you mean by
    confrontational, um, yeah, absolutely - it's "confrontational."

    > Back when I worked at an IDS vendor, I tried to talk our marketing
    > department out of participating in your reviews because, frankly, the
    > vendors are forced to live or die based on your _opinion_. That's nice,
    > but we've seen before that opinions of how to design a product may vary.
    > Many industry expert types "Don't Get This" important aspect: products
    > are often the way they are because their designers believe that's how
    > they should be. Later the designers defend and support those aspects of
    > their designs because that's how they believe their products should be -
    > not simply out of convenience. The egg really DOES sometimes come before
    > the chicken. :)

    There is a smile on my face, as I now sense that I've been doing my job.
    :) So many things to respond to in this one....

    For starters, I typically do not try to review products based on how they
    are designed. If I've misled people in the past on this point, that's my
    bad, and I need to correct this. But honestly, I don't really care that
    much how products are designed - I care about how effectively they address
    the needs of their customers. IMHO, my job, as a reviewer, is to
    objectively test products and use quantitative results to come to
    qualitative conclusions.

    I could be wrong here, but I believe readers follow publications like
    Network Computing because the authors *DO* have opinions. They *do* rate
    things based on a given set of criteria and offer readers advice on
    purchasing.
    The final opinions are, absolutely, opinions - but they are based on
    criteria, and hard, objective testing - not some engineering feat that
    someone pulled off with a piece of silicon. (Although that can be cool,
    and we will write about it!)

    As a side note, as the CTO (no, it's not just a title) of a consulting
    firm, I am tasked with keeping an eye on numerous production systems with 3
    Internet points-of-presence spread across 3 (US) states. I *am* a
    consumer of security technology, so while I do have my opinions, they are
    also based on very real and tangible business needs. While my needs
    aren't the same as those of some of our F500 customers, they are often
    similar. This also factors into the criteria I/we use. I am a consumer,
    and I work with consumers, too.

    Finally, I'm not sure that the vendors "live and die" by an opinion - it's
    just one opinion. They live and die by successfully running a company and
    meeting client needs. Taking a slight tangent, there is a scene in the
    movie "The Contender" where a senior Congressman advises the junior
    Representative to "cross out the word objectivity."

    "Your constituents want you for your opinion, for your philosophy, for
    your subjectivity."

    As odd as this may sound, I think much of that applies to a good product
    reviewer...as long as he or she is clear about why and where those
    opinions come from.

    (Side note: I highly encourage the watching of "The Contender"
    back-to-back with "The Big Lebowski." There is something about watching
    Jeff Bridges playing the President of the United States and The Dude, in
    the same day, that just warms my heart...)

    > So here's the problem: how do you test a product without holding any
    > subjective beliefs in your criteria? Man, that's hard. I wish you luck.
    > (By the way, I've noted a very strong subjective preference on your part
    > to Open Source solutions, most notably Snort. You've consistently cast
    > their failures in kinder light than anyone else's, etc... So be
    > careful...)

    Ok, now this just cracks me up. ISS RealSecure wins a product review in
    like 1999, and I get accused (on this list even!) of being biased towards
    ISS. Years later Enterasys Dragon "wins," and, gee, now Greg is biased
    towards Dragon. And we've got Cisco gear here at Neohapsis! Man, those
    Neohapsis guys are in bed with Cisco!! Yeah! Yeah! Look at all of that
    gear!

    I can't win.

    I find it particularly amusing how "biased" I am...but Snort?
    Snort??!?!? I mean, sure, I like Snort as much as the next NIDS, but
    heck, Snort hasn't even "won" any past Network Computing reviews. So ya
    totally lost me on this one....where did this come from?

    Another side note: Sometimes I think people confuse "objective" with "not
    having an opinion." I had a rep from a vendor e-mail me once because they
    were told by somebody that I supposedly said something negative about
    their product. I was asked, "I thought you were vendor neutral?"

    I may not be in bed with any vendors, but I am entitled to have
    opinions...sheezus....but I digress again....

    > behind the scenes. I was pretty impressed. The thing that impressed me
    > the most was getting a bit of the inside skinny on how many vendors
    > passed the test the first time (many have failed DOZENS of times) and I
    > thought that was cool. Obviously, it'd be best if all products going
    > into the test were perfect going in. But I'd be happy, as a vendor or a
    > customer, if they were BETTER coming out.

    Agreed - and I fear that this is the point that got lost in my last
    e-mail: they *are* different. So are the NSS tests/results. But most of
    them have some value. It all depends on what you want to get out of the
    "results."

    > So, I think there's a place for *ALL* these different tests and it's a
    > bad idea to throw mud at any of them.

    Agreed, good point, and I apologize for confusing things. However, I do
    think it is within our rights as professionals to point out misleading
    tests, misleading results, and things that are generally just not on the
    mark.

    Unfortunately I think there are more bad testing efforts than good ones,
    primarily as a result of what you pointed out earlier: it is HARD to do
    this stuff right. But on this front, I *am*, absolutely, biased. :)

    > Honestly, I think that a smart customer should do their own. It's
    > flat-out INSANE to spend more than $100,000 on a product without doing
    > an operational pilot of it and 2 competitors. Yet, many companies do
    > exactly that. They get what they deserve.

    *nod* Another great point: pilot efforts are essential.

    Good thread...but I'm biased. :) I hope some of this is useful.

    White-Russians optional,

    -Greg


