Re: Testing IDS with tcpreplay




Ok, I've sat on the sidelines on this one as long as I could - I can't take it any longer!!! :)

While IMNSHO there is clearly no ONE correct answer to this debate, I encourage members on this list to consider a few things that I (and others at Neohapsis) have learned over the years of testing signature-based security gateway products. Maybe some of this will help...or at least, help until the next time this thread gets created (about every 2 years or so, yeah?):

1. Philosophically, as a tester I believe it is important to decide whether you want to validate that a device is detecting actual attacks vs. detecting a specific set of packets. They may *look* the same at times, but they are unequivocally NOT the same thing. Think about this one for a second before going further. (I totally agree with Ivan on this point.)

2. I am pretty confident that at least 90% of the products on the commercial market have made their way through our lab at one point or another, and I can say WITHOUT HESITATION or DOUBT that almost ALL OF THEM have botched AT LEAST one signature or two - sometimes more. As in, they thought sig x worked, we ran an actual exploit, and the device came up with a big nada. I can also say - without hesitation - that many of the QA efforts at IDS/IPS/UTM/whatever-we-call-it-these-days vendors are based on replay methods, and sometimes those packet captures (used for the replay process) are flawed themselves. Which leads me to:

3. I can also say that we've seen replay-based tools put traffic on the wire that is utter BS. As in, it's a bunch of junk that no IDS should ever flag because it isn't actually hostile traffic. But rarely do I see testers actually busting out Ethereal and their exploit code to validate that what went down on the wire is, in fact, something the IDS should flag. (A rough sanity-check sketch follows below.) I dislike replay in general for the reasons stated here (and other reasons, too - I won't subject you to any more rambling than necessary!), but it is especially important to keep in mind that just because a replay / injection tool says it does X doesn't mean it really does X correctly. And do we really want our signature-based devices flagging things that aren't actually hostile?
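To give one minimal example of the kind of sanity check I mean - this is just a hedged sketch using Scapy, and the capture file name is a placeholder, not anything specific - you can at least confirm that a replay pcap contains a complete conversation with real payload before you blame (or credit) the IDS:

from scapy.all import rdpcap, IP, TCP

# Rough sanity check of a replay capture before using it against an IDS.
# "exploit_capture.pcap" is a placeholder for whatever capture you replay.
pkts = rdpcap("exploit_capture.pcap")

syns = synacks = payload_bytes = 0
for p in pkts:
    if IP in p and TCP in p:
        flags = int(p[TCP].flags)
        if flags & 0x02 and not flags & 0x10:      # SYN without ACK
            syns += 1
        elif flags & 0x12 == 0x12:                 # SYN+ACK
            synacks += 1
        payload_bytes += len(bytes(p[TCP].payload))

print(f"packets={len(pkts)} syns={syns} syn-acks={synacks} "
      f"tcp-payload-bytes={payload_bytes}")
# SYNs with no SYN-ACKs, or zero payload bytes, means the "attack" in this
# capture never actually completed end-to-end - exactly the kind of junk
# that ends up on the wire and proves nothing about the IDS.

Nothing fancy - the point is simply to look at what is actually in the capture instead of trusting what the replay tool claims it put on the wire.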

4. I like real attack traffic if I didn't state this already. :)

5. Put another way, it's one thing for an engineer to say "Hey, I don't think that packet capture represents a legit attack stream," but it's kind of hard to argue with a root shell prompt staring you in the face. I think I've saved DAYS of my life by preemptively ending finger-pointing thanks to Mr. Shell; the eloquent simplicity of a remote root shell remains a hard point to argue with!

6. I think Metasploit, CORE's product, and ImmunitySec's Canvas are good foundations for executing real attacks. NOTE THAT THESE ARE NOT VULNERABILITY SCANNERS; DON'T USE THOSE - THEIR METHODS TYPICALLY DON'T EXPLOIT ACTUAL TARGETS! (See the recurring thread on this one, too...) However, without sufficient LEGIT background traffic (see Spirent's WebAvalanche for more info here), this exploit-only method of testing is limited in value because:

7. Signature recognition capabilities are often tied to specific product engine thresholds. Perhaps the point I see testers miss most often is knowing the difference between engine flex testing and attack validation. Running an HTTP-based attack with no HTTP background traffic whatsoever and validating whether or not the device detects the attack is one thing. Not real-world at all, limited in value, but it is ONE test. However, validating that same attack while you incrementally load on legit traffic (and in turn stress different parts of the product) is a lot closer to most real-world conditions, and a much more valuable test (see the harness sketch below). There are people on this list who know IDS internals way better than I ever will, but I suspect any engine developers on this list will attest to the, er, threshold challenges. :) This is a much bigger subject, but the short version is NO BACKGROUND TRAFFIC = TEST IS LIMITED IN VALUE.
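To make that concrete, here is a bare-bones sketch of the ramp-the-load-and-re-run-the-attack idea. The helper functions are hypothetical stubs standing in for whatever traffic generator, exploit framework, and alert query you actually have - they are not real APIs:

import time

# Hypothetical flex-test harness: ramp background load, re-run the same
# attack at each level, and record whether the IDS still alerts.
LOAD_LEVELS_MBPS = [0, 50, 100, 250, 500, 750, 950]

def start_background_load(mbps):
    """Stub: drive your traffic generator (WebAvalanche or similar) at this rate."""
    print(f"[stub] background load -> {mbps} Mbps")

def stop_background_load():
    """Stub: stop the traffic generator."""
    print("[stub] background load stopped")

def run_exploit(attack_name):
    """Stub: launch a real exploit (Metasploit, Canvas, ...) at the victim box."""
    print(f"[stub] running {attack_name}")

def ids_alerted(attack_name):
    """Stub: query the IDS console/logs for an alert on this attack."""
    return True  # replace with a real check against your device

def flex_test(attack_name):
    results = {}
    for mbps in LOAD_LEVELS_MBPS:
        start_background_load(mbps)
        time.sleep(5)                    # let the load stabilize
        run_exploit(attack_name)
        results[mbps] = ids_alerted(attack_name)
        stop_background_load()
    return results

if __name__ == "__main__":
    print(flex_test("http_example_attack"))
# An alert at 0 Mbps that disappears at 500 Mbps isn't a "lost" signature;
# it's the engine's threshold showing itself.

The exact load levels and settle times obviously depend on your gear; the shape of the test is the point.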

8. Driving this point one step further: using replay to put attack patterns on the wire is one thing, and using it to put background traffic on the wire is another, but trying to test whether a product can perform accurate attack detection at sustained rates of 500 Mbps? 700 Mbps? 1 Gbps+? Doing the math on the pcap sizes you'll need for replaying background traffic at these higher speeds drives the point home pretty fast (quick math below)...
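For what it's worth, the arithmetic really is that blunt - this is plain unit conversion, nothing vendor-specific:

# Back-of-the-envelope: capture needed per minute of sustained replay.
for mbps in (500, 700, 1000):
    gb_per_minute = mbps / 8 * 60 / 1000    # Mbps -> MB/s -> MB/min -> GB/min
    print(f"{mbps:>4} Mbps sustained ~ {gb_per_minute:.2f} GB of pcap per minute")
# 500 Mbps ~ 3.75 GB/min, 700 Mbps ~ 5.25 GB/min, 1 Gbps ~ 7.5 GB/min --
# and that's per minute of *unique* traffic, before any looping artifacts.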

9. All IDSs break. Period. Or at least, I've never tested one that couldn't be broken - but IMO, that's really not material. What is material is that we consumers understand where that breaking point is, and whether or not that breaking point is relevant to our environment. (i.e., I'm not going to sweat my IDS freaking out at 500 Mbps when I have it inspecting a ~45 Mbps DS3.) In short, know what your testing methods validate... and know what they DO NOT validate.

10. I like using real background traffic and real attack injection. (Wait, did I say that already?) Ok, final point: if you don't have a well-equipped lab, the ideal scenario for many orgs is piloting a device using their own internal traffic, and then using controlled attack and victim boxes. (The Metasploit/VMware combo is a good one, IMO.) As public testers we've worked very hard to learn how to synthetically generate traffic extremely close to real-world conditions, but we will rarely (if ever) be able to *exactly* replicate the real thing in the lab. (We can get a lot closer now, however!) When in doubt, pilot.

Ok - whew - I feel better now! (Hope this helps.)

Greetings from Chicago,

-Greg


