Re: Gartner is Dead, nCircle, Fusion, asset-correlation--was-->False positives, negatives and don't cares
From: Martin Roesch (roesch_at_sourcefire.com)
Date: Tue, 12 Aug 2003 13:19:57 -0400
To: email@example.com
> # my thoughts about data quality and event value coming out of NIDS.
> Ohhh, *data quality* and *event value*, now we're talking...
> I think you're spot on about the confusion regarding false positives
> and non-security events, etc. I think a lot of us fully agree with you.
> I know _a_lot_ of people out there in the real world still don't understand
> this, and if they do, they don't have the time/skill to properly tune
> NIDS, correlate events, etc. etc. etc.
> The Gartner claim is essentially "IDS is dynamic and hard to make
> work; if we move this function to static perimeter access controls which
> most people manage successfully, things will be easier."
> There are a lot of problems with that claim, but I've got two big
> complaints about NIDS which Gartner didn't touch:
> 1. Lack of security event correlation to asset value.
True. Unfortunately, defining asset value is one process that can't help
but be manual. I suppose you could use some sort of behavioral analysis to
locate heavily used servers on a network, but to date I don't know of anyone
outside Arbor who has the technical infrastructure for that sort of thing.
> 2. Lack of value in an Enterprise using predominantly encrypted
> channels of communication (I just ran into this one in a big way).
This is a tough one and the place where behavioral and statistical methods
start to shine. There is infrastructure under development within Snort and
other tech that Sourcefire is developing that will establish the
informational basis for doing these sorts of things in encrypted
environments; I have no doubt that others are looking at similar ideas.
> # Lots of vendors are taking a stab at building the necessary
> # software to apply this sort of context to that data coming out of NIDS
> Where is nCircle? They should be gurus at this, having probably the
> oldest model for doing this. <sigh> (Hi John F., from the old UCU days...)
They definitely had the genesis of the right idea...
> I've spent a number of years caring for and feeding corporate networks
> including HIDS, NIDS, SEMs (netForensics, Pentasafe VLA, etc.) and
> I know all about the pain, frustration, and worthlessness of aggregating
> all this data while being unable to assign any value to it without tons of
> manual analysis. It's easier to ignore it and go play the patching game...
> So we have built an IDS deployment methodology at the organization I
> work for, in which the majority of the work comes well before deployment
> or IDS selection (this is old hat to most of you, so I'll skip the details).
> The primary things that need to happen are:
> 1. Asset Identification.
> 2. Asset Classification and Valuation: classify assets with regard to
> Criticality and Sensitivity, and create a combined asset value (CAV) metric.
> 3. Security Event collection (NIDS, HIDS, SEMs, etc.).
> 4. Vulnerability Posture collection (ISS, Retina, Nessus, Qualys, whatever).
> 5. Security Event correlation with Vulnerability Posture and CAV.
> 6. Security Event metric generation, which is a combination of assigning
> metrics to the security event, and factoring it against the vulnerability
> and CAV metrics of given asset(s).
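To make step 6 concrete, here's a back-of-the-napkin sketch. The function name, the 1-10 scales, and the multiplicative weighting are purely my own illustrative assumptions, not anyone's shipping algorithm:

```python
# Hypothetical sketch of step 6: score a security event against the
# target's vulnerability posture and combined asset value (CAV).
# Scales and weights here are illustrative assumptions only.

def score_event(event_severity, target_vulnerable, cav):
    """event_severity: 1-10 rating from the NIDS signature;
    target_vulnerable: did the vuln scan confirm the target is exposed
    to this attack?; cav: 1-10 combined asset value of the target."""
    vuln_factor = 1.0 if target_vulnerable else 0.1  # de-prioritize non-events
    return event_severity * vuln_factor * cav

# A worm probe (severity 8) against a patched box with a high CAV of 9:
low = score_event(8, False, 9)    # ~7.2 -- noise, despite the valuable target
# The same probe against an unpatched box:
high = score_event(8, True, 9)    # 72.0 -- page somebody
```

The point of the multiplicative form is that an event against an invulnerable host gets crushed toward zero no matter how scary the signature looks, which is exactly the tuning people currently do by hand.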
> Aside from nCircle, IDS vendors are just getting around to about 50% of
> this: ISS has Fusion, and Sourcefire will have RNA soon. The SEM vendors
> have been "correlating" events for some time, but no one seems to have
> taken the most important approach:
> Organizations need a smart, effective, and automatic way to correlate
> events with *BOTH* the vulnerability posture and the actual value of the
> assets involved. The vast majority of host- and network-based audit tools,
> vuln scanners, and SEMs give you slim-to-none ability to define a CAV and
> compare it to either vulnerability posture or security events. And none
> give you both.
Automation is the key here; manual methods don't scale and the information
goes stale. In rapidly changing network environments you have to have a
system that can work without human intervention and that doesn't rely on
aggressive, continuous interrogation of the network. Assigning "value" to
network elements can be as trivial as assigning arbitrary weights to
elements, unless you've got a more specific taxonomy of value identifiers
that you need to use. How would we define such a taxonomy? Role, exposure,
purpose, infrastructure (routing & switching separate?)
(I'm writing this on an airplane at 7AM, please excuse fuzzy thinking...)
That's one possible set of taxonomical identifiers we could use, assigning
scores or weights to each identifier and then combining them to identify the
CAV of a given network element.
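Something like this quick sketch, where the identifiers come from the taxonomy above but the weights and the simple weighted sum are my own arbitrary assumptions:

```python
# Back-of-the-napkin sketch: turning a value taxonomy into a CAV.
# The identifiers (role, exposure, purpose, infrastructure) are from the
# discussion above; the weights and the 0-10 scale are assumptions.

TAXONOMY_WEIGHTS = {
    "role": 0.35,           # e.g. production DB server vs. test box
    "exposure": 0.30,       # internet-facing vs. internal-only
    "purpose": 0.20,        # revenue-critical vs. convenience
    "infrastructure": 0.15, # routing/switching scored on its own axis
}

def combined_asset_value(scores):
    """scores: dict mapping each taxonomy identifier to a 0-10 rating;
    returns a weighted 0-10 CAV for the network element."""
    return sum(TAXONOMY_WEIGHTS[k] * scores[k] for k in TAXONOMY_WEIGHTS)

# An internet-facing e-commerce web server:
cav = combined_asset_value(
    {"role": 9, "exposure": 10, "purpose": 9, "infrastructure": 2})
# cav is ~8.25 -- near the top of the scale, as you'd hope
```

Whether the combination should be a weighted sum, a max, or something fancier is exactly the kind of terminology/semantics argument I'm worried about below.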
> How hard would it be to let one define assets and assign metrics in the
> log aggregation database, and do some metric comparisons to put all three
> elements into perspective? Because that is what is really needed...
Once we define what we're going to call things and how we're going to define
value, it's not hard at all. I've been thinking about this problem for a
while but it's hard to come up with non-subjective terminology. I worry
about having the same problems we run into with classification and
priorities: there are so many ways to classify things (and priority is the
ultimate in contextualization of the data) that we can spin our wheels
forever if we're not careful.
> After years of doing this manually, and often failing, I *feel* the need.
> So do all of you out there still caring for and feeding your networks...
> Why didn't ISS build this function into Fusion? RDG? Maybe it's harder than
> I think. Too bad the code I write looks like it came from a
> pseudo-random-code-generator, or I'd take a stab at it myself. Marty? I know
> you can do this ("this code will be faaast" :)).
Good things come to those who wait. :)
> BTW// #4, above, has to be dynamic. In mid-to-large size Enterprises, the
> network often changes faster than the security/IDS team can keep up with.
> Manually tuning NIDS with respect to specific assets' vulnerability posture
> _does_not_scale_ at all.
Actually, I predict there's going to be a religious battle (probably taking
place on this list and others like it) between people advocating passive
discovery approaches and those advocating active ones, and over how
effective each is in dynamic environments. Passive approaches allow
automated tuning to take place in ways that can be specifically advantageous
over active approaches, and I think this will ultimately prove to be one of
the key differentiators of these technologies.
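To illustrate what I mean by passive discovery feeding automated tuning, here's a toy sketch: watch traffic, learn which hosts actually run which services, and only escalate alerts aimed at services that exist. The data model and tuning rule are hypothetical, not a description of RNA's actual design:

```python
# Toy illustration of passive discovery: instead of actively scanning,
# observe server-side flow endpoints on the wire and build an asset
# table, then use it to suppress alerts against nonexistent services.
# Entirely hypothetical data model -- an assumption for illustration.

from collections import defaultdict

observed_services = defaultdict(set)  # host -> set of (port, proto)

def observe(server_host, server_port, proto):
    """Called for each server-side flow endpoint seen in traffic."""
    observed_services[server_host].add((server_port, proto))

def relevant(alert_host, alert_port, proto):
    """Only escalate alerts aimed at services the host actually runs."""
    return (alert_port, proto) in observed_services[alert_host]

observe("10.0.0.5", 80, "tcp")            # web server answering on 80
print(relevant("10.0.0.5", 80, "tcp"))    # True
print(relevant("10.0.0.5", 1433, "tcp"))  # False -- no SQL Server here
```

The nice property is that the table maintains itself as the network changes, with no scan windows and no load on the monitored hosts, which is where the passive camp will claim the advantage in dynamic environments.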
> # I think that the data that ends up on the "cutting room floor" after this
> # contextualization process still has value for trending purposes.
> Well, that's another important point that deserves its own discussion.
> We need a Security Event Management (SEM) list to discuss centralized
> log collection, aggregation, reporting and forensics...
The more data we generate, the more important it will be. I wonder if there
are better ways to approach it...
> Good discussion, it's really helped me solidify my thoughts. Cheers,
--
Martin Roesch - Founder/CTO Sourcefire Inc. - (410) 290-1616
Sourcefire: Enterprise-class Intrusion detection built on Snort
firstname.lastname@example.org - http://www.sourcefire.com
Snort: Open Source Network IDS - http://www.snort.org