The high false positive rate is a major concern. Too many of them and people will naturally come to believe the system is "crying wolf" again.
Automated vulnerability scanners turn up mostly false positives, but even the wild goose chase that results can be cheaper for businesses than manual processes, according to NCC Group security engineer Clint Gibler. At the Nullcon security conference in Goa, India, Gibler said he pointed an unnamed automated scanner at 100 of …
Not really. You have to take into account (a) how aggressive the scanning is, i.e. how willing the customer is to risk things actually breaking, and (b) the context in which the scanning takes place. If the target (customer) is sensitive, expect high false positive rates. And if you are scanning from the outside network edge, internal controls that would mitigate an apparent vulnerability in a multi-stage attack may not be visible to you.
It's not black and white.
I can't see the binary choice there, sorry.
You use an automated scanner because it's MUCH faster than a human going through established vulnerabilities, and then you use a human to interpret the result. A vulnerability scanner is a tool, but its output requires interpretation, in the same way that non-medical staff can look at an EKG and probably work out that the patient is still alive, but it takes a specialist to distinguish anomalies from normal variations.
You use a human for 2 reasons: 1 - to identify issues, and 2 - to discard even CORRECT positives if they represent no actual, actionable risk. That's what you pay someone for, but it's also why you license scanners such as Nessus: you don't want that expensive person wasting his or her time on what is, in essence, script kiddie work.
Maybe I haven't had enough coffee yet, but I fail to see the insight or news here. High false positives? Well, tune the tool or flame the supplier, but you need BOTH the humans AND the tech.
This assumes that your management isn't going to demand that all these vulnerabilities be fixed, since they don't understand the difference between a false positive and a true positive.
OTOH, I'd be quite happy if I got the budget for all of them, as it would give me some margin to do what is necessary rather than the decorative nonsense we normally have to do to make it appear we do enough (mainly to offset any liability).
False positives are an annoyance - they take a massive amount of time and expertise to weed out. But missing real vulnerabilities (false negatives) creates risk and engenders a dangerous false sense of security.
Many organizations (and services) act as though false negatives just don't exist. They've set up a process that involves running a scanner and then having humans filter out the false positives. This totally ignores the fact that scanners, particularly SAST and DAST application security scanners, have extremely high false negative rates.
I'm not crazy about the math in the article. You can adjust the salary and flaws-per-minute figures however you want, but with scanners you're going to burn your whole security budget dealing with false positives. Most organizations are resource limited, so every false positive prevents you from finding and fixing real vulnerabilities.
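To put rough numbers on that budget argument, here's a quick back-of-the-envelope sketch in Python. Every figure in it (analyst rate, triage time, finding count, false positive rate) is a made-up illustrative assumption, not the article's data:

```python
# Back-of-the-envelope triage cost model. All numbers below are
# illustrative assumptions, not figures from the article.
HOURLY_RATE = 75.0          # assumed fully-loaded analyst cost, USD/hour
MINUTES_PER_FINDING = 10    # assumed average time to triage one finding
FINDINGS = 2000             # assumed findings from one scan run
FALSE_POSITIVE_RATE = 0.85  # assumed share of findings that are noise

# Total human effort to triage everything the scanner reported.
triage_hours = FINDINGS * MINUTES_PER_FINDING / 60
triage_cost = triage_hours * HOURLY_RATE

# Portion of that spend that went to findings with no real risk.
wasted_cost = triage_cost * FALSE_POSITIVE_RATE

print(f"Total triage: {triage_hours:.0f} hours, ${triage_cost:,.0f}")
print(f"Spent on false positives: ${wasted_cost:,.0f}")
```

With these assumed inputs, most of the triage spend goes to noise - which is the point: tweak the constants however you like, and the false positive share still dominates the bill.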
Check out the OWASP Benchmark Project if you'd like to test your own tools to see what they are good at, and what they aren't. The results absolutely confirm the rates of false positives and false negatives mentioned in the article.
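For anyone unfamiliar with how a benchmark like that grades a tool: you run it against a labeled test suite and count true/false positives and negatives, then score the tool on how far it sits above random guessing. A minimal sketch, assuming a score of true positive rate minus false positive rate (which is how the OWASP Benchmark reports results; the counts here are invented):

```python
# Score a scanner against a labeled benchmark suite.
# The counts passed in below are made-up example numbers.
def benchmark_score(tp: int, fn: int, fp: int, tn: int):
    """Return (true positive rate, false positive rate, score)."""
    tpr = tp / (tp + fn)   # share of real vulnerabilities the tool found
    fpr = fp / (fp + tn)   # share of safe test cases it flagged anyway
    return tpr, fpr, tpr - fpr

tpr, fpr, score = benchmark_score(tp=700, fn=300, fp=400, tn=600)
print(f"TPR={tpr:.0%} FPR={fpr:.0%} score={score:.0%}")
# A tool that flags everything gets TPR=100% but also FPR=100%, score 0 -
# which is why raw detection counts alone tell you nothing.
```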
Biting the hand that feeds IT © 1998–2019