RE: Intrusion Prevention
From: Graham Field (email@example.com)
- Previous message: Kurt Seifried: "Re: IDS Stealth Mode"
- Maybe in reply to: Golomb, Gary: "RE: Intrusion Prevention"
- Next in thread: Brian Laing: "RE: Intrusion Prevention"
Date: Thu, 9 Jan 2003 01:50:40 +0000
From: Graham Field <firstname.lastname@example.org>
To: "Golomb, Gary" <GGolomb@enterasys.com>, <email@example.com>, Rick Williams <firstname.lastname@example.org>
some thoughts inline below
>This thread has already brought up some interesting points over the past
>two weeks, so I'll make an effort to avoid repeating what has been
>stated already. We've all got enough email to catch up on from the
>holidays without being redundant! I just wanted to address the comment
>about someone saying they wouldn't look at Dragon, or ANY IDS for that
>matter, because it repeatedly does not participate in a specific test --
>that comment is a little unnerving.
but why not?
>Sure, there are industry testing standards which
>are undoubtedly the most inclusive and open to peer analysis, such as
>OSEC, which I would also be suspicious of anyone who refuses to
>participate in. (We can revisit this point later.) But to take a product
>that continuously participates successfully in other magazine and
>third-party tests, but does not participate in one (or another) specific
>test... Well, for me that would raise more red-flags about the test than
>the product. If Dragon (or **any** IDS) refused all the tests publicly
>available, then of course we could draw some conclusions like you have
>alluded to. Please realize that I am not the official spokesperson for
>these types of subjects. Just someone who's close enough to have some
>thoughts on it...
so some tests are ok, but others are not... but which are ok and which are not?
>There are several issues at work when we discuss testing. First of all -
>and most generically - is that getting involved in a test is very time
>consuming. You really don't think the people who are doing the testing
>are actually taking a few months to learn the ins-and-outs of each
>product before testing it, do you? Of course not!
yes that would make sense, but the same is true for all commercial vendors
and open source products.
>I've only seen it done
>once, and it was by the same guys who wrote much of the OSEC testing
>standard. (Incidentally, that test took almost a year to complete!)
>Anyways, for every test we participate in, we need to send people from
>our Dev, QA, or R&D teams to assist on-site with the testing cycle.
>Those are resources that are being taken away from their normal schedule
>to assist the testing team in question. Since there are MANY tests
>completed every year, we have to carefully choose which to participate
>in, and which not to.
>Time and resources are not the only factor involved in selecting which
>tests to (or not to) participate in. Generally, we are given a
>description of the testing methodology upfront. Believe it or not,
>sometimes we're told that we cannot see the testing methodology upfront.
Is this not true for all vendors and open source products? After all, if
someone is going to hack you, will they tell you in advance how it will be
done?
>This dumbfounds me for all the reasons that MJR (and others) already
>brought up. IDS testing is too easy to inadvertently (and sometimes
>intentionally - read: Miercom's test of Intrusion.com) skew the results
>of any test. When I think of testing, I think of scientific process.
>Unfortunately, many of the IDS tests you read each year do not adhere to
>any sort of process, much less an actual scientific methodology.
agreed, but this is the nature of the environment... the key question is:
do you have a better process for testing in mind?
> If a third-party testing group tells us that we are not allowed to view the
>test process we'll be subjected to, then we will probably reject the
>offer to test - because of all the terribly flawed test plans we have
>already seen to date.
Is this not true for any IDS available, commercial or otherwise? Should they
not all drop out? After all, these tests only compare products with other
products.
>Now, as far as the NSS test goes... This is not a free test. Not only
>does each company have to pay for the tests, they have to pay
>additionally for the reports generated by the tests. While the reasons
>for this are reasonable, this alone should raise some flags about the
>agenda of companies that drive marketing campaigns on results from tests
>like these. I'll stop here on this one.
er... they spend money on testing and get a report that is not always
beneficial to them! What does Enterasys have against spending money on a
report that may or may not put them in a favourable light?
>Also - and more importantly, there have been issues with NSS testing
>methodologies. Rather than have my slanted (and VERY strong) opinion on
>the subject, look at the tools they use to implement their tests, then
>do a search on lists (like this one) to see some of the pros/cons of
>using those tools. Put those individual discussions together, and you'll
>get a more clear view of the bigger picture here.
I would like to see a more in-depth discussion on this, if anyone has any
opinions on these tests and others. (ducking ;-) )
>Anyways, I think the point has already been made in other emails, but...
>Don't base your decisions exclusively on one test -
Yes this makes sense
>it's too easy to
>introduce significant testing methodology flaws into a test; ***DO
>NOT*** solely base your decision on test results that are given to you
>from a vendor; and if you have any doubts - test it yourself or ask for
>other end-users' experiences on a list like this. There are things like
>stability, support, and the ability to effectively integrate into your
>environment that frequently cannot be discovered without your own
>testing. Hopefully the vendors will respect your question enough to not
>skew the conversation. (Right Simon?!)
a well-reasoned and thought-out response.