Re: Tcpdump hardware requirements for a 100mb line tap

From: Bennett Todd
Date: 06/12/02

Date: Wed, 12 Jun 2002 12:34:11 -0400
From: Bennett Todd <>
To: Justin Funke <>

2002-06-09-17:12:03 Justin Funke:
> I am going to deploy tcpdump on a full-duplex LAN tap, capturing
> full packets (-s 1500). Does anyone know what the processor/RAM
> requirements would be to just write these packets to a file?

Processor/RAM requirements will vary dramatically depending on
details of OS choice, version, interface hardware, and so forth.

I am not sure, but I strongly suspect that if you had a fast (by
today's standards) CPU --- say 1.5GHz or better --- plenty of RAM (I
can't see why more than, say, 128MB would be needed or even useful),
and a good, well-supported network controller, you should be fine.
That's close enough to the bottom end these days that there probably
isn't much reason to worry about details unless you're hoping to
assemble your sniffer from scrap.

But if this link is expected to spend any length of time saturated,
you want to arrange for a disk I/O subsystem that can sustain
10MB/s+ for long stretches (a saturated 100Mb line is roughly
12.5MB/s per direction) without unduly burdening the system. If
cost is no object, a conservative approach might be to get a nice
high-performance RAID controller, configure it for RAID 1+0 (a
stripe of mirrors) --- one of the fastest redundant layouts for
sustained sequential writes --- with very fast SCSI drives.
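One cheap way to sanity-check a candidate disk subsystem is to time
a large sequential write and do the arithmetic. A rough sketch ---
the path and size are placeholders, and `bs=1M`/`conv=fsync` assume
GNU dd; point the output at the filesystem or partition you actually
intend to log to:

```shell
# Rough sequential-write benchmark: write 100MB, flush it, time it.
OUT=/tmp/ddtest.$$                    # placeholder path
START=$(date +%s)
dd if=/dev/zero of="$OUT" bs=1M count=100 conv=fsync 2>/dev/null
END=$(date +%s)
ELAPSED=$(( END - START ))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1     # guard against sub-second runs
echo "~$(( 100 / ELAPSED )) MB/s sequential write"
rm -f "$OUT"
```

If that number isn't comfortably above your worst-case capture rate,
no amount of CPU will save you.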

If, however, cost is an object, I'd experiment and see whether
tcpdump can write the log data to a raw partition --- give it the
partition's device name as the file to write to. If it can, then the
cheap way to do the deed
would be to get a collection of modern quick IDE drives, each with
its own dedicated controller, have tcpdump log to one drive at a
time, rotating, then schedule a low-priority task to migrate the
bits off the drives. As long as your link is not so saturated that
the logger is kept sweating all the time, you should be able to pull
the bits back out.
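The rotate-between-drives loop might look something like the sketch
below. The interface and device names are hypothetical, and the
capture command is passed in as a parameter so the rotation logic
itself can be exercised without root:

```shell
# Rotate a capture command across a list of dedicated targets, one
# at a time. In production the command would be something like
# "tcpdump -i eth1 -s 1500 -c 1000000 -w" and the targets raw
# partitions (/dev/hde1, /dev/hdg1, ...); both are assumptions here.
rotate_capture() {
    capture_cmd=$1; shift
    for target in "$@"; do
        $capture_cmd "$target"
        # ...meanwhile a separate nice'd job drains the idle targets...
    done
}

# Hypothetical production loop:
#   while true; do
#       rotate_capture "tcpdump -i eth1 -s 1500 -c 1000000 -w" \
#           /dev/hde1 /dev/hdg1 /dev/hdi1
#   done
```

The `-c` bound is what hands control back to the loop so it can move
on to the next drive.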

If worst comes to worst you could go with a hot-swappable medium, have
your logger ping-pong between two drives, and sneakernet the bits to
your archival/analysis platform. But I hate having to physically do
something to keep a process chugging along.

Then you've got to plan on what to do with this data; 10MB/s works
out to roughly 864GB/day, which stacks up quick. Of course your
long-term utilization should be one or two
orders of magnitude down from that, but this is still a pile o'
bits to grovel over. I think if I owned the job of doing something
useful with this sort of data, I'd be inclined to offload the bits
to a NetApp or something like that, compress the heck out of 'em,
and deploy a rackfull of data thagomizers to do the chewing.
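For the record, the back-of-the-envelope storage math (decimal
units, assuming the link really does sustain 10MB/s):

```shell
# 10MB/s of capture, around the clock:
BYTES_PER_SEC=$(( 10 * 1000 * 1000 ))
GB_PER_DAY=$(( BYTES_PER_SEC * 86400 / 1000000000 ))
echo "${GB_PER_DAY} GB/day"     # 864 GB/day
```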