Re: [Full-Disclosure] Response to comments on Security and Obscurity

From: James Tucker (jftucker_at_gmail.com)
Date: 09/02/04

    To: Barry Fitzgerald <bkfsec@sdf.lonestar.org>
    Date: Thu, 2 Sep 2004 11:42:44 +0100
    
    

    On Wed, 01 Sep 2004 17:06:45 -0400, Barry Fitzgerald
    <bkfsec@sdf.lonestar.org> wrote:
    > You're right with this scenario, of course, but I don't think that they
    > meant that there was no room for physical protection in information
    > security.

    My point was intended to make people realise that it does not
    matter where in the universe your security holes are; you only
    achieve security if you close all of them. Analogies help the less
    technically educated understand complex paradigms; even when they
    are not accurate they still have value in that they at least
    provoke thought in the right direction. Someone who is not
    technically minded can still build up a pattern of thinking about
    ways of entering a system if they approach it with the correct
    mindset. It is this mindset which is of far more value than any
    knowledge that any of us have. The industry moves fast enough that
    knowledge is no substitute for thought.

    > I think they meant that you can't make a physical comparison to an
    > information security structure. One can't actually, as an example,
    > compare a firewall to a constantly burning facade.

    Quite true; a firewall is more like the kids' toy with the
    different shaped holes and pieces, where you can only get a
    specific piece (if it is well formed) through a specific hole (a
    port, if you like). This analogy also sucks, as it only partially
    covers the concepts of deep packet inspection and content filtering
    (which are starting to become more common). If you want to teach
    someone the accurate and full truth then sure, throw away the
    analogies, but you will have to be prepared to teach them many
    years' worth of experience, from basic IT to modern-day security.
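
    To make the shape-sorter picture a little more concrete, here is a
    minimal sketch (Python, with a toy rule and packet format that I
    have made up for illustration - not any real firewall's API) of the
    difference between plain port matching and the deep packet
    inspection mentioned above:

        # Toy illustration only: the rule and packet formats are invented.

        # A classic packet filter only looks at the "shape" of the traffic:
        # the protocol and the destination port.
        ALLOW_RULES = [
            ("tcp", 80),   # allow web traffic
            ("tcp", 25),   # allow mail
        ]

        def port_filter_allows(protocol, dst_port):
            """Allow the packet only if (protocol, port) matches an allow rule."""
            return (protocol, dst_port) in ALLOW_RULES

        # Deep packet inspection also looks *inside* the piece being pushed
        # through the hole, e.g. dropping payloads with known-bad patterns.
        BAD_PATTERNS = [b"cmd.exe", b"/etc/passwd"]

        def dpi_allows(protocol, dst_port, payload):
            if not port_filter_allows(protocol, dst_port):
                return False
            return not any(pattern in payload for pattern in BAD_PATTERNS)

        print(port_filter_allows("tcp", 80))                        # True
        print(dpi_allows("tcp", 80, b"GET /index.html"))            # True
        print(dpi_allows("tcp", 80, b"GET /scripts/..%2fcmd.exe"))  # False

    The point of the sketch is only that the second check has to
    understand the content, not just the hole it comes through.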
     
    > Take a military base, for example. One can, if they were so inclined,
    > use the military base as an example of a well secured area. You've got
    > gates, gun emplacements, armed guards, many locked doors, cameras at the
    > gates, razorwire, etc. Military gates are presumably well secured, right?

    I would rather stay away from commenting on military security; for
    a start, military groups all over the world are renowned for
    sophisticated use of deception techniques to increase both physical
    and virtual security.

    This analogy is as good as any other, so I am not going to bash
    it. The value of the analogy is that it lets you think of things
    like the following (not an exhaustive list - I'm not writing a
    paper; is it even possible to publish an exhaustive list before
    someone finds a new idea for a hole?):
    - The locked doors compare to authentication; picking the lock is
    then comparable to exploiting a hole. Brute force attacks compare
    to trying every key on the guard's key chain that you stole.
    - Armed guards, heh, HP's new worm. lol, enough said.
    - Cameras at the gates - your IDS.
    - Razorwire - well, some protocols will d/c you for messing
    around, but not that many.

     
    > Well, you can try to make an analogy between this and a well-secured
    > network. The problem is that the analogies don't align. A firewall
    > isn't really like a gate with an armed guard at it. Your soldiers can't
    > be turned into unwitting zombies by IE exploits. An IDS isn't really
    > like a camera. System passwords aren't actually like locked doors.
    You're right, but the analogy should therefore come with an
    explanation of the differences, for example:
    - A firewall has limited intelligence, and a firewall can have
    holes (weaknesses). This could only be compared to something like
    the guard being unable to see a person who is exactly 89kg, 142cm
    tall, with black hair, green eyes, sideburns, wearing a straight
    jacket. Ridiculously specific and unrealistic in the physical
    world, but this is the complexity level we see in the digital
    world.
    - Agreed, as far as I know humans cannot turn into zombies. Humans
    can be deceived into performing incorrect and damaging actions,
    though.
    - An IDS is quite a lot like a camera; this may well be the best
    part of the analogy. An IDS only sees what it is looking at, as
    does a camera. Read: it's good, but not failsafe. An IDS is not a
    counter-attack mechanism, nor is a camera.
    - System passwords and people trying keys in doors differ mostly
    in terms of speed, and an uneducated reader should be made to
    understand just how different the rates are in a computer system.
    My home computer can attack NTLM hashes at a rate of ~4M keys per
    second, and an audit of the full ASCII character set on passwords
    took about 20 days. Removing large portions of the character set
    (which rarely if ever get used, even in security circles), this
    can be reduced to a matter of hours (see the rough arithmetic
    sketch after this list). If I were hacking and I wanted passwords
    fast, a zombie army would get results in a matter of minutes, not
    hours or days. Frankly, with modern-day machines, if someone has
    stolen your hash, passwords suck.
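
    The speed difference is easy to show with back-of-the-envelope
    arithmetic. The sketch below (Python) reuses the ~4M keys per
    second figure from above; the length-7 passwords and the two
    character-set sizes are assumptions of mine, chosen purely to
    illustrate how much the character set matters:

        # Rough brute-force timing, for illustration only.
        # 4,000,000 guesses/second echoes the figure quoted above; the
        # password length and character-set sizes are assumed values.

        GUESSES_PER_SECOND = 4_000_000

        def exhaustive_search_time(charset_size, length):
            """Worst-case seconds to try every password of exactly `length` characters."""
            keyspace = charset_size ** length
            return keyspace / GUESSES_PER_SECOND

        for label, charset_size in [("printable ASCII (95 chars)", 95),
                                    ("lowercase + digits (36 chars)", 36)]:
            seconds = exhaustive_search_time(charset_size, 7)
            print(f"{label}: ~{seconds / 86400:.1f} days (~{seconds / 3600:.1f} hours)")

    With those assumed numbers the full printable set works out to
    roughly two hundred days, while the cut-down set falls to a handful
    of hours - the same days-to-hours collapse described above, and
    that is before you even consider distributed (zombie) cracking.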

    > The analogy can loosely be used to illustrate a point, but anything
    > beyond very loose interpretation is virtually worthless because of its
    > inaccuracy.
    Maybe, but you have to educate people somehow, and you don't have time
    to explain everything.

    _______________________________________________
    Full-Disclosure - We believe in it.
    Charter: http://lists.netsys.com/full-disclosure-charter.html

