RE: IDS Correlation

From: Bill Royds (lists@royds.net)
Date: 04/02/02


From: "Bill Royds" <lists@royds.net>
To: "Stephen P. Berry" <spb@meshuggeneh.net>, "Marcus J. Ranum" <mjr@nfr.com>
Date: Mon, 1 Apr 2002 21:19:38 -0500

Why don't we agree on a binary packet dump format, such as tcpdump's, as a way of sending actual packets to each other, instead of trying to create a wondrous new ASCII format?
TCPDump has the advantage that most IDSes can handle it, there is an open source implementation, and, being binary, it is reasonably compact.
What we need, then, is a metadata language that describes the circumstances under which the dump data was gathered: what the filters were, what the triggers were, and so on.
This metadata could be sent as XML with the raw data as an embedded object, much as pictures are embedded in ordinary HTML.
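As a rough illustration of that split, here is a minimal sketch (not any real standard; the element names, sensor name, and trigger ID are all hypothetical) of an XML metadata envelope carrying a base64-encoded tcpdump/pcap blob, using only the Python standard library:

```python
import base64
import xml.etree.ElementTree as ET

# A minimal pcap global header (24 bytes): magic number, version 2.4,
# zero offsets, 65535 snaplen, link type 1 (Ethernet). In practice the
# bytes would come straight from a tcpdump capture file.
pcap_bytes = bytes.fromhex(
    "d4c3b2a1"              # magic (little-endian)
    "0200" "0400"           # file format version 2.4
    "00000000" "00000000"   # thiszone, sigfigs
    "ffff0000"              # snaplen 65535
    "01000000"              # link type: Ethernet
)

root = ET.Element("capture")
meta = ET.SubElement(root, "metadata")
ET.SubElement(meta, "sensor").text = "ids-gateway-1"        # hypothetical sensor name
ET.SubElement(meta, "filter").text = "tcp and dst port 80"  # BPF filter in effect
ET.SubElement(meta, "trigger").text = "signature-1234"      # hypothetical trigger ID
dump = ET.SubElement(root, "dump", encoding="base64", format="pcap")
dump.text = base64.b64encode(pcap_bytes).decode("ascii")

xml_doc = ET.tostring(root, encoding="unicode")
print(xml_doc)

# The receiver recovers the raw dump bit-for-bit, so any pcap-aware
# IDS tool can consume it directly.
recovered = base64.b64decode(ET.fromstring(xml_doc).find("dump").text)
assert recovered == pcap_bytes
```

The point of the design is that the XML never touches the packet data itself: the verbose markup is confined to the (small) description of circumstances, while the bulk payload stays in the compact binary format every tool already reads.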

-----Original Message-----
From: Stephen P. Berry [mailto:spb@meshuggeneh.net]
Sent: Mon April 01 2002 16:38
To: Marcus J. Ranum
Cc: Matthew F. Caldwell; Jared A. Tucker; eddonega@WellsFargo.COM; Keith
T. Morgan; xwu@anr.mcnc.org; focus-ids@securityfocus.com;
spb@meshuggeneh.net
Subject: Re: IDS Correlation

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus J. Ranum writes (of XML):

>Just like everything, it can be overengineered.

Non-overengineered XML is about as common as content-rich VRML.

>The concepts
>aren't awful, though. The stuff I've been doing with fargo
>uses a subset of XML - which should work through an XML parser
>but that only uses a minimum of tags, etc. The reality of the
>matter is that you're going to need some kind of record delineation,
>whether it's commas, newlines, attr=value, or whatever.
>Otherwise you've got to crush everything into text and then have
>a de-parser on the other side. Considered otherwise, that's
>basically the same thing as doing a compression/decompression
>process (only harder to implement!) on the data. In other words,
>I don't think you can win this fight and the correct tool(s) to
>make the problem go away are found in compression algorithms,
>not in simpler (or more complex!) markup schemes.
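The compression point above can be sketched concretely (the record layout here is invented purely for illustration): because tag and field names repeat on every record, a generic compressor squeezes tag-heavy data down toward the size of the raw values, so the markup overhead largely disappears on the wire.

```python
import zlib

# One invented packet summary, once as verbose markup and once as bare values.
record = b'<pkt><src>10.0.0.1</src><dst>10.0.0.2</dst><len>60</len></pkt>'
raw = b'10.0.0.1,10.0.0.2,60'

many_xml = record * 1000
many_csv = raw * 1000

# The repeated tags are exactly the kind of redundancy DEFLATE removes;
# the compressed sizes end up far closer together than the raw sizes.
print(len(many_xml), len(zlib.compress(many_xml)))
print(len(many_csv), len(zlib.compress(many_csv)))
```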

Employing XML seems to be a common way for an application developer to
admit to not understanding the data model. In -theory- there's nothing
wrong with XML per se, but I'm wary of it as a standard for exchanging
data from any nontrivial problem space.

It's great if the kind of information you're trying to convey involves a
lot of discrete objects, and you want to talk about them by listing
everything you know about each of those objects. It's lousy for trying
to convey -process- or representing moderately complex relationships between
the individual data. The fact that XML is seen as adequate for something like
a standard for representing generic intrusions indicates that we (as a
whole) are still hung up on an incident model that consists of events on
the scale of single packets.

There certainly is a place for this sort of model---but I think it's getting
increasingly inadequate as a sort of implied default way of thinking about
or discussing incidents. And, as you observe, if you -are- looking at
things on this scale, you probably want the raw data itself, and not some
abstracted version of it stuffed between twice its own weight in tags and
delimiters. In fact, the only cases where I can really imagine wanting
the data in some tokenised, decimal-number format is when I'm looking
at the output/summary/report/alert or possibly if I'm doing a bunch of
repetitive analysis in a database or some such.

- -Steve

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.3 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD4DBQE8qNNAG3kIaxeRZl8RAsJjAJi/lHH4xZNV8bFE+Zq8+u/vePKQAJ9esyEj
RZ133b2tgNXBpSD2WPsdyQ==
=Wm0v
-----END PGP SIGNATURE-----


