RE: Rooting out false positives

From: Omar Herrera (oherrera_at_prodigy.net.mx)
Date: Mon, 18 Jul 2005 20:25:35 -0500
To: pen-test@securityfocus.com

    > -----Original Message-----
    > From: Erin Carroll
    >
    > I recently rejected the below submission to the list as it was more
    > appropriate for Tenable's nessus list rather than pen-test, but I
    > wanted to submit it with an addendum to bring up a topic which I would
    > love to see discussed: How do list members deal with rooting out false
    > positives? When do you have "enough" feedback in pen-testing a possible
    > vulnerability before putting something in the false positive column?
    >
    > 5 years ago certain vulnerabilities would have been beyond my skill
    > level at the time to assess and verify correctly. I'm sure there are
    > things now that fall into that area as well. What methods do you guys
    > use to prevent that situation from occurring?

    This is, in my opinion, one of the things that distinguishes a good
    pentester. Unfortunately, it is not an exact science, and customers
    rarely recognize it.

    My approach resembles a spiral (you could also implement it with
    decision trees); it is simply a progressive method for refining the
    results, and it goes more or less like this:

    Phase 0) Establish the baseline for further analysis: correctly identify
    open ports, the operating system, and application brands (and versions,
    if possible). I flag all conflicting reports as potential false positives
    (e.g. a TTL that does not match other OS identification results, or
    banners that conflict with other application ID signatures).
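
    To make this concrete, here is a minimal sketch in Python of how that
    flagging could be automated. Everything below is hypothetical: the data
    structure is just one way to organize the evidence, and in practice the
    guesses would come from your own tooling (TTL observations, stack
    fingerprints, banner grabs).

    # Sketch: flag hosts whose fingerprinting evidence disagrees.
    # All values are hypothetical examples, not the output of a real tool.
    host_evidence = {
        "10.0.0.5": {
            "ttl_guess": "Linux",       # e.g. initial TTL close to 64
            "stack_guess": "Linux",     # TCP/IP stack fingerprint
            "banner_guess": "Windows",  # e.g. an IIS-style banner
        },
    }

    for host, guesses in host_evidence.items():
        if len(set(guesses.values())) > 1:
            print(host, "has conflicting OS evidence:", guesses,
                  "-- treat dependent findings as potential false positives")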

    Phase 1) Once (and if) phase 0 is successful and I am sufficiently
    confident about the OS and the application brands and versions, I try to
    pick out the obvious false positives, e.g. MS IIS vulnerabilities showing
    up on a Solaris server. That example is an exaggeration, but it
    illustrates the point: the port-scanning phase of the pentest will often
    yield enough information to identify the server correctly, so that the
    vulnerability scanner can be configured to scan only for vulnerabilities
    relevant to that server, avoiding false positives like this one. The
    exception is some well-filtered, proxied services, but even in that case
    you might still be able to identify the brand and version of the service
    manually, by looking at patterns that are inherent to the application (or
    the client might simply give you this information, depending on the kind
    of pentest).
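
    As a rough illustration of that filtering step (the finding records and
    platform tags here are hypothetical; real scanners expose this
    information in their own formats):

    # Sketch: drop findings whose affected platform cannot match the
    # host identified in phase 0. All records are made-up examples.
    identified_os = "Solaris"

    findings = [
        {"id": "F-1", "title": "IIS Unicode traversal", "platform": "Windows"},
        {"id": "F-2", "title": "Apache chunked encoding", "platform": "any"},
    ]

    relevant = [f for f in findings
                if f["platform"] in ("any", identified_os)]
    likely_fp = [f for f in findings if f not in relevant]

    print("keep for phase 2:", [f["id"] for f in relevant])
    print("likely false positives:", [f["id"] for f in likely_fp])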

    Phase 2) We have our vulnerability scans at this point, and they look
    good at first sight. It is now time to see which of the vulnerabilities
    that match our OS and applications are really there. Since this is a
    pentest, phase 2 is really part of the exploitation phase. I therefore
    change the approach here: instead of trying to figure out which results
    are clear false positives, I try to prove which ones are not, starting
    with the most critical ones, of course (as time and scope permit). Note
    that not being able to exploit a certain vulnerability doesn't mean it is
    a false positive, but if you can exploit it, you can certainly rule it
    out as one. Even the most experienced pentesters have limits to their
    abilities and knowledge, and there are many factors (as you point out)
    that can prevent successful exploitation within the time frame allowed
    by the contract.
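
    A sketch of that bookkeeping, with hypothetical record names and a
    placeholder exploitation step, might look like this. The important
    detail is the asymmetry: a failed attempt leaves a finding unconfirmed
    rather than marking it a false positive.

    # Sketch: attempt findings in order of severity, record outcomes.
    findings = [
        {"id": "F-2", "severity": 6, "status": "unverified"},
        {"id": "F-3", "severity": 9, "status": "unverified"},
    ]

    def try_exploit(finding):
        """Placeholder for the actual manual or tool-assisted attempt."""
        return finding["id"] == "F-3"  # hypothetical outcome

    for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
        if try_exploit(f):
            f["status"] = "confirmed"    # definitely not a false positive
        else:
            f["status"] = "unconfirmed"  # may still be real; see phase 3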

    Phase 3) The most rigorous pentesters might go as far as checking the
    signature script behind each remaining reported vulnerability. In my
    experience, however, this is not practical: it is not bulletproof, and it
    is prohibitively time-consuming (besides, you can't act as the QA
    department for the vulnerability scanning tool all the time); in short,
    it is not cost-effective. Phase 3 in my case means: "Time to have a
    meeting with the client's technical counterpart, to show and verify
    preliminary results". So I report as confirmed only those findings that
    I'm confident about (backed by evidence), and as "potential" any other
    findings. There is nothing wrong with recognizing that you are not able
    to fully demonstrate each and every finding; pentesters simply
    acknowledge that, given limited visibility, scope, and time, this is as
    far as they could get. Now, the client can really help here to identify
    some of the remaining false positives. This can be as easy as the client
    giving you proof that a certain patch has been applied (or acknowledging
    that it has not been) for a given vulnerability, so that you can add
    this information to your report.
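
    The reporting rule can be summarized in a few lines (again a sketch;
    the field names and the evidence record are hypothetical):

    # Sketch: classify findings for the report, letting client-supplied
    # evidence (e.g. proof that a patch was applied) settle the rest.
    findings = [
        {"id": "F-3", "status": "confirmed"},
        {"id": "F-2", "status": "unconfirmed"},
    ]
    client_evidence = {"F-2": "vendor patch applied; change ticket provided"}

    for f in findings:
        if f["status"] == "confirmed":
            f["report_as"] = "confirmed"
        elif f["id"] in client_evidence:
            f["report_as"] = "false positive (client evidence attached)"
        else:
            f["report_as"] = "potential"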

    So that's it; phase 3 has worked very well for me, since it increases
    interaction and reduces friction with the people who are potentially
    responsible for the vulnerabilities and for fixing them. If they give you
    their observations signed, you protect your work and reputation by
    confirming or discarding false positives using evidence provided by
    third parties that arguably have more accurate and complete information
    about the systems being tested.

    There will always be situations where you won't be able to apply this
    methodology directly, due to scope or legal restrictions, but I believe
    it is a good starting point.

    Now the commercial:
    At OISSG (www.oissg.org), specifically in the ISSAF project, we are
    trying to incorporate practices like these, which address this kind of
    problem in pentesting and security audits, drawing on the experience of
    many contributors.

    Take a look at the latest draft (version 0.2 should be out soon).
    Projects like ISSAF are open to anyone who wishes to collaborate, and we
    also take very seriously the opinions and experiences shared in lists
    like this one.

    Kind regards,

    Omar Herrera

