Auto vulnerability scanners turn up mostly false positives

Automated vulnerability scanners turn up mostly false positives, but even the wild goose chase that results can be cheaper for businesses than manual processes, according to NCC Group security engineer Clint Gibler. At the Nullcon security conference in Goa, India, Gibler said he pointed an unnamed automated scanner at 100 of …

  1. a_yank_lurker Silver badge

    One Concern

    The high false positive rate is a major concern. If it's too high, too many false alarms will naturally lead to the belief that the system is "crying wolf" again.

    1. Andrew Commons

      Re: One Concern

      Not really. You have to take into account (a) how aggressive the scanning is, i.e. how willing the customer is to have things really broken, and (b) the context in which the scanning is taking place. If the target (customer) is sensitive then expect high false positive rates. If you are scanning from the outside network edge then internal controls that will mitigate an apparent vulnerability in a multi-stage attack may not be apparent.

      It's not black and white.

    2. Ole Juul

      Re: One Concern

      My problem is that I'm fixated on nostly.

      1. Andrew Commons

        Re: One Concern

        It's on the El Reg sliding scale between Mostly and Never....

      2. Andrew Commons

        Re: One Concern

        And it has now been corrected without any comments making these comments notally inkomprehensibule.

  2. redpawn Silver badge

    Nostly false positives

    Hide! Don't look. Those are the absolute worst kind!

    1. redpawn Silver badge

      Re: Nostly false positives

      Edit: Headline fixed. Thought we might have a great new word to kick around.

      nostly <= mostly;

  3. Fred Flintstone Gold badge

    You need both..

    I can't see the binary choice there, sorry.

    You use an automated scanner because it's MUCH faster than a human going through established vulnerabilities, and then you use a human to interpret the result. A vulnerability scanner is a tool, but its output requires interpretation, in the same way that non-medical staff can look at an EKG and probably work out that the patient is still alive, but it takes a specialist to distinguish anomalies from normal variations.

    You use a human for 2 reasons: 1 - to identify issues and 2 - to discard even CORRECT positives if they represent no actual actionable risk. That's what you pay someone for, but that's also why you license scanners such as Nessus: you don't want that expensive person wasting his or her time on doing what is in essence script kiddie work.

    Maybe I haven't had enough coffee yet, but I fail to see the insight or news here. High false positives? Well, tune the tool or flame the supplier, but you need BOTH the humans AND the tech.

    1. Anonymous Coward
      Anonymous Coward

      Re: You need both..

      This assumes that your management isn't going to demand that all these vulnerabilities are going to be fixed, since they don't understand the difference between a false positive and true positive.

      1. Anonymous Coward
        Anonymous Coward

        Re: You need both..

        This assumes that your management isn't going to demand that all these vulnerabilities are going to be fixed, since they don't understand the difference between a false positive and true positive.

        OTOH, I'd be quite happy if I got the budget for all of them, as it would give me some margin to do what is necessary rather than the decorative nonsense we normally have to do to make it appear we do enough (mainly to offset any liability).

  4. Anonymous Coward
    Anonymous Coward

    False Positives

    An absolute nightmare when the company that owns you states "YOU MUST FIX THESE"

    Got a consultant in who reached the same conclusion, but we were still told "YOU MUST STILL FIX THESE"

    Anon because reasons

  5. planetlevel
    FAIL

    Don't forget False Negatives

    False positives are an annoyance - they take a massive amount of time and expertise. But missing real vulnerabilities (false negatives) creates risk and engenders a dangerous false sense of security.

    Many organizations (and services) act as though false negatives just don't exist. They've set up a process that involves running a scanner and then having humans filter out the false positives. This totally ignores the fact that scanners, particularly SAST and DAST application security scanners, have extremely high false negative rates.
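    To make the distinction concrete, here's a minimal sketch of how those two rates are computed when a scanner's findings are compared against known ground truth. All names and numbers are hypothetical, not figures from the article or the OWASP Benchmark:

    ```python
    # Hypothetical sketch: false positive / false negative rates from
    # a scanner's report vs. a set of known-real vulnerabilities.

    def scanner_rates(findings, ground_truth):
        """findings and ground_truth are sets of vulnerability IDs."""
        false_pos = findings - ground_truth      # noise the scanner reported
        false_neg = ground_truth - findings      # real flaws the scanner missed
        fp_rate = len(false_pos) / len(findings) if findings else 0.0
        fn_rate = len(false_neg) / len(ground_truth) if ground_truth else 0.0
        return fp_rate, fn_rate

    # Example: scanner reports 10 issues, only 3 are real; 4 real flaws exist.
    reported = {f"issue-{i}" for i in range(10)}
    real = {"issue-0", "issue-1", "issue-2", "issue-42"}
    fp, fn = scanner_rates(reported, real)
    print(f"false positive rate: {fp:.0%}")   # 70% of the report is noise
    print(f"false negative rate: {fn:.0%}")   # 25% of real flaws were missed
    ```

    The point of the second number is exactly the one above: a team that only filters false positives out of the report never sees issue-42 at all.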

    I'm not crazy about the math in the article. You can adjust the salary and flaws per minute however you want, but with scanners, you're going to burn your whole security budget dealing with false positives. Most organizations are resource limited, so every false positive prevents you from finding and fixing real vulnerabilities.

    Check out the OWASP Benchmark Project if you'd like to test your own tools to see what they are good at, and what they aren't. The results absolutely confirm the rates of false positives and false negatives mentioned in the article.
