RE: IPS, alternative solutions
From: Stuart Staniford (stuart_at_nevisnetworks.com)
To: "'Jason'" <email@example.com>, "'Kyle Maxwell'" <firstname.lastname@example.org>
Date: Mon, 27 Sep 2004 22:09:47 -0700
> Worms are now capable of infecting the global vulnerable population in
> 15 minutes. Will you bet a penny that any IPS will protect you at the
> onset of an attack? Two days into it? Which detection method will it
> use? Will the worm use that same method? What will be the false positive
> rate for that method? A signature of FF FE 00 00 is sure to have a high
> false positive rate.
> This is why I do not think there is a measurable ROI when compared to
> directing those same resources at better approaches.
> The only recourse you have here is patching, praying, and utilizing a
> good Intrusion monitoring system to detect the signs of an attack.
I would like to offer some food for thought here.
Firstly, it is hard to make precise ROI calculations about worms. We do not
know, but worm outbreaks are probably like earthquakes and forest fires in
expressing self-organized criticality and thus having heavy tailed
distributions: arbitrarily large amounts of damage are occasionally
possible, and it's hard to estimate the likely frequency because most of the
damage will come from the worst cases, but they'll be infrequent enough that
it's not easy to build statistics on them.
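To make the heavy-tail point concrete, here's a toy Python sketch (the tail exponent and sample count are assumptions chosen purely for illustration, not measurements of real worm damage): drawing outbreak "damage" from a Pareto distribution with a tail exponent near 1 shows that most of the total loss comes from the rare worst cases, which is exactly why statistics built on typical outbreaks mislead.

```python
import random

# Illustrative sketch only: model per-outbreak damage as Pareto(ALPHA)
# samples, as the self-organized-criticality analogy suggests.
random.seed(42)
ALPHA = 1.1            # tail exponent near 1 => very heavy tail (assumption)
N_OUTBREAKS = 100_000

# Pareto(alpha) samples via inverse-CDF: x = (1 - U)**(-1/alpha)
damages = sorted(
    ((1.0 - random.random()) ** (-1.0 / ALPHA) for _ in range(N_OUTBREAKS)),
    reverse=True,
)

total = sum(damages)
worst_1pct = sum(damages[: N_OUTBREAKS // 100]) / total
mildest_50pct = sum(damages[N_OUTBREAKS // 2:]) / total
print(f"share of damage from worst 1% of outbreaks:    {worst_1pct:.0%}")
print(f"share of damage from mildest 50% of outbreaks: {mildest_50pct:.0%}")
```

In a run like this the worst 1% of outbreaks carries the majority of the total damage while the mildest half contributes almost nothing, so a mean estimated from ordinary outbreaks badly understates the expected loss.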
Secondly, you seem to believe that IPS's are all reactive, signature-based
mechanisms that cover the same space as patching. This isn't true:
IPS's have advantages that patching doesn't, and they lack drawbacks that
patching has. They cover an overlapping but distinct subset of the problem.
As to the biggest drawbacks of patching: it covers only known
vulnerabilities for which patches are available, and it takes longer to do
than a worm takes to spread. Obviously, it's generally good to patch the
known vulnerabilities, but it's not a panacea. As has been widely noticed
(Bruce Schneier has written particularly cogently and clearly on it), there
are lots of reasons why organizations need to move carefully on patching
production systems, since patches are frequently known to break things that
previously worked. Worms can spread in minutes (less, actually), and so
there's no way to be confident patching can be done fast enough.
I would also point out that there's a danger in patching. The population of
black hats has always been smart, and appears to be getting more
professional and financially motivated over time. In the past, they were
quite capable of developing and circulating their own zero-day exploits
before those vulnerabilities were discovered by white-hats, and I'm sure
they can in the future too. I think the only reason they don't do it as
much at the moment is that it's less work to use the ones that have been
helpfully publicized, or to reverse engineer the patches. If patch
management becomes truly fast and widespread, they will adapt by finding
and exploiting their own zero-day vulnerabilities. Recent research tentatively suggests
that the vulnerabilities found by the public-disclosure folks are only the
tip of the iceberg. See Eric Rescorla's paper at WEIS this year, "Is
finding security holes a good idea?".
So if the attackers do get driven into releasing worms that exploit zero-day
vulnerabilities, then patch management is really a headache for the
following reason: it greatly reduces the diversity of software on the
network. Thus when there *is* a vulnerability, 100% of installed systems
will have it. Since the central factor in the speed and extent of worm
spread (at least for scanning worms), is the vulnerability density (the
fraction of enterprise addresses that are vulnerable to the worm), this
makes the problem worse. It's also a central variable that controls the
effectiveness of systems deployed to contain worms.
It's a little akin to firefighting when a century of suppressing all the
little fires results in a big forest-floor fuel load and lots of scrawny
overcrowded trees, and then when there *is* a fire, it's a doozy.
Well-implemented IPS systems can do a lot to block the spread even of
worms that use a completely unknown vulnerability. This is true even today,
but it's also an area of intense R&D, both in industry and academia. IPS's
that block scans can almost completely prevent the spread of current
scanning worms in enterprises *if* they are properly deployed. For a worm
to spread, each worm instance must find, on average, more than 1.0 other
vulnerable-but-uninfected systems to infect. This is called the epidemic
threshold. Below it, the worm will peter out; above it, the worm
will spread exponentially. Now, observed vulnerability densities on
networks are quite low for worms to date (typically less than 1%). If the
vulnerability density is 1%, a scanning worm needs to scan 100 systems to
find one vulnerable one. That's well within the sensitivity of even lousy
portscan detectors. The best portscan detectors can detect a scan after
more like five attempts. However, to work well, the detectors must be
deployed widely enough (or must have enough unused address space, unknown
to the worm, routed to them) to detect the scan early, and the blocking
response must be correspondingly fast. There is a good deal of detail to
get right here.
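As a rough illustration of the epidemic-threshold arithmetic above, here's a toy Python simulation (the address-space size, vulnerability density, and blocking thresholds are all assumptions, and real worms and detectors are far messier): each infected host probes random addresses and is blocked after a fixed number of failed probes, and the worm dies out when blocking is fast relative to the vulnerability density.

```python
import random

def simulate(n_hosts=50_000, density=0.01, block_after=5, seed=1):
    """Toy scanning-worm model: each infected host probes random addresses
    and is blocked after `block_after` probes that fail to infect anything."""
    random.seed(seed)
    vulnerable = set(random.sample(range(n_hosts), int(n_hosts * density)))
    infected = {next(iter(vulnerable))}        # patient zero
    frontier = list(infected)
    while frontier:
        newly_infected = []
        for _host in frontier:
            failures = 0
            while failures < block_after:      # detector blocks after N misses
                target = random.randrange(n_hosts)
                if target in vulnerable and target not in infected:
                    infected.add(target)
                    newly_infected.append(target)
                else:
                    failures += 1
        frontier = newly_infected
    return len(infected)

print("infected, blocked after 5 failed probes:   ", simulate(block_after=5))
print("infected, blocked after 1000 failed probes:", simulate(block_after=1000))
```

With fast blocking, each instance expects roughly block_after x density = 0.05 new infections, well below the threshold of 1.0, so the outbreak fizzles at a handful of hosts; with slow blocking the expectation is around 10, and the worm saturates nearly the whole vulnerable population.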
In my view we are rapidly seeing convergence of IDS/IPS and firewalls.
You're increasingly getting all this functionality built into the ASICs of
current security appliances. Indeed several of the IPS companies began life
by pitching themselves as a combination of an IDS and a firewall. However,
if for the sake of discussion we view a firewall as a purely static
mechanism (rather than involving dynamic detection and response), then
firewalls are inadequate as internal barriers precisely because they are
static. A worm gets to try a gazillion times against the firewall, and so
any vulnerable system it can see (and there have to be a few if the
organization is going to get any work done) is exposed. Sure, you can
prevent server-server and client-client communication, but that just means
the worm needs two vulnerabilities, and there have been plenty of such pairs
to use of late. Typically, people only drive this kind of firewalling so
far into their network, because it's a hassle to configure: ultimately,
specifying who should talk to what is an O(NM) problem (where N is the
number of clients and M the number of servers), and that's a lot of
configuration in a large network.
Thus internal firewalling is much better augmented by dynamic detection
mechanisms (and indeed firewall vendors are adding this).
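A back-of-envelope sketch of that O(NM) configuration burden (the client and server counts below are made-up examples, not data from any real network):

```python
# Pairwise client/server policy grows as O(N*M): one allow/deny decision
# per (client, server) pair. Counts here are illustrative assumptions.
def pairwise_rules(n_clients: int, m_servers: int) -> int:
    return n_clients * m_servers

for n, m in [(100, 10), (1_000, 50), (10_000, 200)]:
    print(f"{n:>6} clients x {m:>3} servers -> {pairwise_rules(n, m):>9,} pair decisions")
```

Even a mid-sized enterprise ends up with tens of thousands of pair decisions to specify and maintain, which is why static internal firewalling rarely gets pushed deep into the network.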
None of which is to say we're at the end of the road in containing worms. There's all
kinds of fun tricks the worm writers will play once protection against the
really dumb worms is reliable. And the security industry will figure out
defenses to those, and so it will go back and forth, which is what makes the
problem fun to work on.
Stuart Staniford, Principal Scientist