Re: [fw-wiz] Handling large log files

Like others have mentioned in previous replies, we've used syslog-ng and
Splunk to manage firewall and switch event logs. But sometimes we've
wanted to detect behaviour or anomalies that can't be done easily with
those tools. For these, I've used SEC (Simple Event Correlator), a perl
script, available from:

During the replacement of our campus network, when lots of inter-switch
dependency issues arose, we used it to alert us to switches reporting an
error that hadn't had any problems for the past 5 days (usually
indicating something external had happened to affect them), or to events
that were new in the past 5 days. We also used it to identify things
like links bouncing (down/up/down within a certain period of time). The
output of SEC was fed back into syslog-ng and represented in Splunk
as "synthetic" events, for which we had special notification and
alerting rules.

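For the link-bouncing case, the SEC rule looked roughly like this (a
sketch only; the regex, window, and mail action here are illustrative,
not our production config):

```
# Alert if the same interface goes down 3 times within 2 minutes
type=SingleWithThreshold
ptype=RegExp
pattern=(\S+) .* Interface (\S+), changed state to down
desc=Link flapping on $1 interface $2
action=pipe '$0' /usr/bin/mail -s "link flap: $1 $2" noc@example.com
window=120
thresh=3
```

SingleWithThreshold fires the action once the count of matching events
reaches the threshold inside the sliding window, which is exactly the
down/up/down pattern described above.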
The goal of the process was to do exception reporting, allowing us to
collect all the events but only be notified when certain criteria were met.

-----Original Message-----
From: firewall-wizards-bounces@xxxxxxxxxxxxxxxxxxxxxxx
[mailto:firewall-wizards-bounces@xxxxxxxxxxxxxxxxxxxxxxx] On Behalf Of
Nate Hausrath
Sent: Tuesday, May 05, 2009 6:41 PM
To: firewall-wizards@xxxxxxxxxxxxxxxxxxxxxxx
Subject: [fw-wiz] Handling large log files

Hello everyone,

I have a central log server set up in our environment that would receive
around 200-300 MB of messages per day from various devices (switches,
routers, firewalls, etc). With this volume, logcheck was able to
effectively parse the files and send out a nice email. Now, however,
the volume has increased to around 3-5 GB per day and will continue
growing as we add more systems. Unfortunately, the old logcheck
solution now spends hours trying to parse the logs, and even if it
finishes, it will generate an email that is too big to send.

I'm somewhat new to log management, and I've done quite a bit of
googling for solutions. However, my problem is that I just don't have
enough experience to know what I need. Should I try to work with
logcheck/logsentry in hopes that I can improve its efficiency more?
Should I use filters on syslog-ng to cut out some of the messages I
don't want to see as they reach the box?
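(If you go the syslog-ng filter route, the usual trick is a filter plus
a log path with no destination and flags(final), so matching messages
are discarded before anything else sees them. A sketch, with made-up
source/filter names and match patterns:)

```
# drop noisy, known-benign messages before they reach disk
filter f_noise {
    match("%LINK-3-UPDOWN" value("MESSAGE"))
    or match("teardown TCP connection" value("MESSAGE"));
};
log { source(s_net); filter(f_noise); flags(final); };  # no destination: dropped
log { source(s_net); destination(d_logs); };            # everything else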

I have also thought that it would be useful to cut out all the duplicate
messages and just simply report on the number of times per day I see
each message. After this, it seems likely that logcheck would be able
to effectively parse through the remaining logs and report the items
that I need to see (as well as new messages that could be interesting).
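(That dedup-and-count idea can be sketched in a few lines: normalize
away the variable fields so repeated messages collapse into one key,
then count. The normalization patterns below are hypothetical; real
logs would need more of them.)

```python
import re
from collections import Counter

def normalize(line):
    """Collapse variable fields so repeated messages hash together."""
    line = re.sub(r'\d+\.\d+\.\d+\.\d+', '<ip>', line)  # IPv4 addresses
    line = re.sub(r'\b\d+\b', '<n>', line)              # other numbers
    return line.strip()

def summarize(lines):
    """Return (normalized message, count) pairs, most frequent first."""
    return Counter(normalize(l) for l in lines).most_common()

if __name__ == "__main__":
    sample = [
        "May  5 10:00:01 fw1 deny tcp 10.0.0.5 -> 192.168.1.2",
        "May  5 10:00:02 fw1 deny tcp 10.0.0.9 -> 192.168.1.2",
        "May  5 10:01:07 sw3 link down on port 12",
    ]
    for msg, count in summarize(sample):
        print(count, msg)
```

Feeding the summarized output (rather than the raw stream) to logcheck
keeps the email small while still surfacing new message types.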

Are there other solutions that would be better suited to log volumes
like this? Should I look at commercial products?

Any comments/criticisms/suggestions would be greatly appreciated!
Please let me know if I need to provide more information. Again, my
lack of experience in this area makes me hesitant to make a solid
decision without asking for some guidance first. I don't want to spend
a lot of time going in one direction, only to find that I was completely
wrong.

firewall-wizards mailing list