When talking about vulnerabilities, the Cisco PSIRT has probably seen it all. Vulnerabilities that can be exploited over the network, vulnerabilities that need local access, and vulnerabilities that need physical access. Vulnerabilities that affect integrity, confidentiality, and availability. Vulnerabilities at the operating system level, at the application level, or at the protocol level. Hands down, the most time-consuming and complex to handle are those involving a protocol -- we need to investigate each and every Cisco product that implements the affected protocol. And if the vulnerability is in, say, IPv4… the investigation will require significant time and resources.
But there is one kind of report that makes the heart of any PSIRT Incident Manager sink -- an email from a customer asking “How do I fix these vulnerabilities?”. And attached to the email -- a report from a vulnerability scanner.
Now, we know that many of our customers consider vulnerability scanning software a key part of their toolset, especially when dealing with HIPAA, SOX, or PCI compliance. There are multiple vulnerability scanner vendors out there (well, even Cisco used to sell a vulnerability scanner -- good old NetSonar). So we kind of consider them a fact of life -- something we have to deal with. But we would like to share a bit of insight and some recommendations, based on the collective experience of the Cisco PSIRT on handling vulnerability scanner reports. I would like to think that the following tips will help everyone involved -- companies behind vulnerability scanners, customers running the tool, and vendors analyzing a vulnerability scanner report.
Tips for companies developing and selling vulnerability scanners:
1) “This is broken. Fix it”
You have to clearly state the problem, providing enough detail to allow (a) someone running the tool to understand the issue, and (b) the vendor behind the product being scanned to reproduce the issue even if the vendor doesn’t own a copy of your product. Saying “Host X is vulnerable to an XSS attack” just doesn’t cut it. Please provide details on what you sent and what you got back.
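To make the point concrete, here is a minimal sketch of the difference between an actionable finding and a useless one. All field names and values are invented for illustration -- this is not any scanner's real schema:

```python
# Sketch: a finding is reproducible only when it carries the evidence --
# what was sent, what came back, and where. Field names are illustrative.
def is_actionable(finding):
    """Return True only if the finding includes enough detail to reproduce it."""
    required = ("target", "description", "request_sent", "response_seen")
    return all(finding.get(field) for field in required)

# "Host X is vulnerable to an XSS attack" -- not reproducible by anyone
vague = {"target": "host-x", "description": "vulnerable to an XSS attack"}

# Same finding, with the evidence a vendor needs to reproduce it
detailed = {
    "target": "host-x",
    "description": "reflected XSS in the 'q' parameter of /search",
    "request_sent": "GET /search?q=<script>alert(1)</script> HTTP/1.1",
    "response_seen": "HTTP/1.1 200 OK ... <script>alert(1)</script> ...",
}
```

The detailed record lets the vendor reproduce the issue without owning a copy of the scanner; the vague one does not.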
2) “I’m going to call this one… VBV-2011-ABCD-11345”
Within your product, feel free to assign any name you want to a test or vulnerability. But when available, you should also provide the CVE ID(s) for any vulnerability being reported. This allows your customers and us to correlate your tool's findings with other vulnerability scanner reports and other vulnerability management tools. Without exact details (see (1)), we may not even know what vulnerability we're talking about. If you provide the relevant CVE ID(s), that becomes easy. Using only your own tool ID -- almost impossible.
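The correlation itself is trivial once the mapping exists -- a sketch, where both the tool IDs and the CVE IDs are made up for illustration:

```python
# Hypothetical mapping from a scanner's internal test IDs to CVE IDs.
# Every ID below is invented for this sketch -- no real advisory is implied.
TOOL_ID_TO_CVE = {
    "VBV-2011-ABCD-11345": ["CVE-2011-1000"],
    "VBV-2011-ABCD-11346": ["CVE-2011-1001", "CVE-2011-1002"],
}

def correlate(tool_id):
    """Return the CVE IDs behind a tool-specific finding ID, if known."""
    return TOOL_ID_TO_CVE.get(tool_id, [])
```

With that one lookup, a finding from any scanner can be matched against vendor advisories and every other tool in the customer's toolset.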
3) “Risk rating for this vulnerability is WeAreDoomed+”
Similar to (2) -- we know not everybody uses CVSS for scoring a vulnerability -- but when available, please provide the CVSS score. A CVSS score provides a level playing field, and takes the guesswork out of "is a rating of VERY HIGH a lot worse than HIGH?". Not to mention it allows customers to calculate their own Environmental score -- again, a vulnerability with a CVSS Base score of 10 may have a much lower rating for a specific customer, depending on their environment. Related to this: keep in mind that the person tasked with running the scanner, or analyzing the results, may not have the appropriate technical background (or you may not be providing enough information -- see (1)) to perform their own assessment of the actual risk of anything being reported.
4) “All animals are equal but some animals are more equal than others”
Lucky for us, we happen to own a copy of your product. So we proceed to launch a scan against our product -- but the results we get are completely different from the ones the customer got. Countless hours have been spent trying to match the exact scanner configuration from the customer to what we are using. Please provide an option to export/import a scanner configuration file -- that way, we can be sure we are using the customer's exact settings on our test bed.
5) “I don’t play nice with other kids”
The customer who approaches us with a scanner report is our mutual customer -- we all want to do what’s best for them. That includes helping us reproduce the results from your product, when run against our product. It’s the only way to keep our mutual customers happy.
True story: a couple of years ago, I contacted a security scanner vendor, trying to understand why they were reporting a given vulnerability against one of our products. Try as we might, we were completely unable to reproduce the issue in-house, and we didn’t own a license to their product. Their answer was along the lines of “buy an Enterprise Plus site license, plus the Diamonds set on Platinum support agreement -- both combined are only 2.5 million dollars per month -- and then call our support line”. Again: be nice. We should both have the satisfaction of our common customers in mind.
6) “The more the merrier”
We recently got a report from a certain scanner after being launched against product X. On the report, we found “Host X is vulnerable to a Type-1 SQL injection attack”, followed by “Host X is vulnerable to a Type-2 SQL injection attack”, and so on until we got to “Host X is vulnerable to a Type-20 SQL injection attack”. And that’s about it -- no details whatsoever. How are we supposed to know what your tool considers a “Type-1 SQL injection attack”? What is the difference between a Type-1 and a Type-20? See (1) -- and also, there’s really no need to report 20 vulnerabilities. It’s ONE vulnerability, with twenty different variations. We promise we will fix them all.
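Collapsing those variants into a single finding is a one-liner's worth of logic -- a sketch, assuming the scanner's "Type-N" naming convention from the report above:

```python
import re
from collections import defaultdict

def group_variants(titles):
    """Collapse 'Type-N <vulnerability>' entries into one finding per
    underlying vulnerability, keeping the variants as details."""
    groups = defaultdict(list)
    for title in titles:
        base = re.sub(r"Type-\d+ ", "", title)  # strip the variant tag
        groups[base].append(title)
    return dict(groups)

report = [
    "Type-1 SQL injection attack",
    "Type-2 SQL injection attack",
    "Type-20 SQL injection attack",
]
```

One finding, twenty variations attached -- much easier on everyone reading the report.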
7) “Fruit salad” results
So your product may report “Host X seems to be running Cisco IOS release 45.67(3)L”. And a couple lines later, “Host X is vulnerable to the Microsoft IIS BadRequest DoS attack”. And a couple lines later, “Host X is vulnerable to the Oracle Solaris telnet privilege escalation vulnerability”. Does this seem right? Either the host is running Cisco IOS -- and hence can’t be running Microsoft IIS (and certainly isn’t running Oracle Solaris) -- or it is a Windows host, or it is a Solaris host. Yes, we know OS detection is far from perfect. But you have to have some kind of internal logic here, saying “hey, this cannot be right -- guess I’m going to have to tag those results as UNVERIFIED”. Or assign some kind of “confidence rating” to those results -- so customers know there is a good chance those are false positives.
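That internal logic can start out as simple as comparing each finding's target platform against the fingerprinted OS. A sketch -- the field names are illustrative, not any scanner's real schema:

```python
def tag_implausible(detected_os, findings):
    """Tag any finding whose platform contradicts the OS fingerprint as
    UNVERIFIED, instead of reporting it at full confidence."""
    tagged = []
    for finding in findings:
        platform = finding.get("platform")
        if platform and detected_os and platform.lower() != detected_os.lower():
            finding = dict(finding, confidence="UNVERIFIED")
        tagged.append(finding)
    return tagged

results = tag_implausible("Cisco IOS", [
    {"title": "IOS telnet DoS", "platform": "Cisco IOS"},
    {"title": "IIS BadRequest DoS", "platform": "Microsoft IIS"},
])
```

The IOS finding stays at full confidence; the IIS finding -- impossible on an IOS host -- gets flagged so the customer knows it is likely a false positive.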
8) The source of too much evil -- banner grabbing and version strings
Report: “Host X is running OpenSOMETHING version A.BC -- which is vulnerable to <insert a very long list of vulnerabilities here>”. This is a no-no for multiple reasons -- some of them being:
- the vulnerability may depend on a compile-time option -- and this specific implementation may not have been compiled with said option
- the vulnerability may depend on an option being set in a configuration file -- and this specific implementation may not have said option enabled (e.g. Kerberos/AFS or PAM support in OpenSSH, ECDH in OpenSSL)
- the vulnerability may indeed apply to that version, without any additional requirements -- but the binary could have been patched from source, without changing the version string (there are multiple reasons for doing this)
Bottom line: if you didn’t actually verify the vulnerability, please tag the results accordingly -- again, the UNVERIFIED tag.
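One way to honor that bottom line is to let the tag follow from how the finding was established. A sketch, assuming a hypothetical `method` field that records whether the check was an active probe or just a version-string match:

```python
def tag_finding(finding):
    """A finding proven by an active check is VERIFIED; one inferred
    purely from a banner/version string (or by unknown means) is not."""
    if finding.get("method") == "active-check":
        return "VERIFIED"
    return "UNVERIFIED"

# Matched only on the version string -- could be compiled without the
# vulnerable option, or patched from source with the string unchanged.
banner_only = {"title": "OpenSOMETHING A.BC overflow", "method": "version-match"}
proven = {"title": "OpenSOMETHING A.BC overflow", "method": "active-check"}
```

A banner match alone never earns the VERIFIED tag -- which is exactly the distinction the reasons above call for.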
9) The “I don’t really know, but better report it” approach
Similar to (8) -- let’s use the Apache HTTP server as an example. The product is running the Apache HTTP server with ServerTokens set to Prod -- so the scanner is only getting back a banner of “Server: Apache”. Just in case, the scanner will then report each and every vulnerability ever found on the Apache HTTP server -- once again, without providing any kind of indication that those results are unverified and may be false positives.
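For reference, this is standard Apache httpd configuration, nothing scanner-specific -- with ServerTokens set to Prod, the server discloses only the product name, so there is no version for a signature to match against:

```apache
# httpd.conf -- minimize version disclosure in the Server header.
# With "Prod", responses carry only "Server: Apache" -- no version number,
# so any version-based signature match is pure guesswork.
ServerTokens Prod
```

Reporting every Apache vulnerability ever published against that banner is guessing -- and the report should say so.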
10) “Yesterday was yesterday -- today is a new day”
A cause of much frustration are those scanners that change their results between signature updates. We understand that security scanner vendors, like everyone else, strive to provide the best possible product -- which includes the tuning of signatures, bug fixing, improving detection logic, etc. And due to that, between software releases, some signatures may not trigger anymore on a given scenario. But how can it be that the same software, under the same test conditions, reports a set of vulnerabilities on a Monday (when using signature update X), and an almost completely different set of vulnerabilities on Wednesday, when using signature update X+1? True story: during a 30-day period, we ran the same security scanner, using the same product version and same configuration, against the same Cisco product. The only thing that changed was the signature level -- across approximately 10 different scans during that 30-day period, about 90% of the vulnerabilities reported would change from one scan session to the next. Not good.
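That kind of drift is easy to quantify: compare the finding sets from two consecutive scans of the same, unchanged target. A sketch -- the signature IDs are invented for illustration:

```python
def churn(previous, current):
    """Fraction of findings that changed between two scans of the same
    target under the same configuration (Jaccard distance of the sets)."""
    previous, current = set(previous), set(current)
    union = previous | current
    if not union:
        return 0.0
    return 1 - len(previous & current) / len(union)

# Signature update X on Monday vs X+1 on Wednesday, same unchanged target
monday = {"SIG-001", "SIG-002", "SIG-003"}
wednesday = {"SIG-001", "SIG-104", "SIG-205"}
```

If the target didn't change, a churn anywhere near the 90% we observed is a property of the signatures, not of the device being scanned.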
11) “I just know it in my heart to be true”
Vulnerabilities that can only be exploited at Layer 2 (e.g. Etherleak), being reported when scanner and target are multiple hops away. How does that work? A vulnerability on Linux that only applies when you’re using a specific video driver -- how can that be part of the report, if your product cannot even determine whether the driver is installed? Again -- maybe an UNVERIFIED flag?
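The Etherleak case reduces to a one-line sanity check: a Layer-2-only vulnerability requires link-layer adjacency between scanner and target. A sketch, assuming the hop count is already known (e.g. derived from probe TTLs):

```python
def l2_finding_plausible(hop_count):
    """An attack that only works at Layer 2 (e.g. Ethernet frame padding
    leaks) needs link-layer adjacency: the target must be directly
    connected, i.e. at most one hop away."""
    return hop_count <= 1
```

If the target is four hops away, an L2-only finding belongs under UNVERIFIED at best -- the scanner simply cannot have observed the vulnerable behavior.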
12) “Random vulnerability reporting: ENABLED”
I don’t know how else to name this category. The scanner, as an example, reports “IP ID value is always zero”. Ok. We try to reproduce it in the lab. We go through the source code. We verify there’s no intermediate device on the customer network that may be playing with the IP ID. And the IP ID is never zero. Where does that come from? Please see (1).
Tips for vulnerability scanner users:
1) Understand the inherent limitations of security scanners.
Easy as that.
2) If it is a true positive -- we will fix it. But it may not happen overnight
Do NOT run the vulnerability scanner the day before deploying the device in production and expect to have a fix within 24 hours. We need to reproduce the issue. We need to engage the product development team. They need to understand the root cause and come up with a fix. Once we have a fix -- we need to retest to make sure we indeed fixed the issue. And then we need to perform additional testing (at least -- regression and QA) on the whole product. That all takes time.
3) Help us help you
Trust me: we really, really want to help you. But in many cases we will need you to provide additional information: network setup, packet captures, device configuration, scanner configuration, etc. Work with us -- if you cannot help us reproduce the issue, we cannot fix it.
4) Send a link to this post to your security scanner vendor
Tell them to read the section “Tips for companies developing and selling vulnerability scanners” above.
That is about it. Wow, it ended up being longer than I expected. Hope you didn’t fall asleep halfway through.