We often hear about a dramatic class of vulnerabilities called “zero-days” (also written “0 days,” “0-days,” or “0days,” and pronounced either “zero days” or “oh days”). I have seen a number of email threads and blog posts lately that refer to vulnerabilities in this class in vastly different ways. This led me to ask myself: what exactly is a zero-day vulnerability?
Emotion around zero-days can be high. This is predominantly because vulnerabilities with this label are perceived to be of greater impact and urgency. That is often correct and fair. However, there is at least one other reason for heightened energy around these issues: many teams and organizations have special service level agreements or informal expectations levied upon them in “outbreak” or “zero-day” scenarios. Imprecise use of the zero-day label can mix with these expectations to needlessly increase the urgency—and corresponding organizational disruption—of a vulnerability in these situations.
So what are the critical characteristics that set a zero-day apart from another, seemingly important and urgent vulnerability? In my opinion, three characteristics have earned these vulnerabilities the urgency they hold, and if any one of them is not present, the vulnerability is not a zero-day:
- The vulnerability must be unfixed
- There must be a working exploit
- There must be external knowledge of the vulnerability
I’d like to dive into each of those items individually. First, the vulnerability must be unfixed, or perhaps more precisely, there must not be an official and publicly available fix for it. History has shown us that some zero-days were indeed known to the source vendor beforehand, even if their testing, release, or disclosure processes prevented them from making fixed software available before the vulnerability reached zero-day infamy. Regardless, if official and public fixed software is available, it’s not a zero-day.
As a matter of practicality, once a vulnerability has been correctly labeled a zero-day it generally carries that label until the vulnerability is no longer relevant or until its relevance has been greatly diminished. The public release of fixed software does not magically change the myriad of blog posts, documents, and alerts that refer to the “critical vulnerability in foo” as a zero-day. It would be handy if it did, however.
Second, there must be working exploit code for a vulnerability to qualify as a zero-day. The tricky part here is “working exploit,” where “working” is entirely dependent on the vulnerability and the context in which it is exploited. The most common counter-example of a working exploit is proof-of-concept code that produces a crash without executing code on the vulnerable device. However, whether a “working exploit” is indeed “working” must be judged against the situation and how an attacker might use the exploit. An exploit that executes code in a browser is certainly a “working exploit,” as are exploits that crash remote routers or, perhaps, crash the client application of a physical security system in such a way as to allow a physical theft.
It is also important to note that there is no requirement that the exploit code be either widely available or available to you, but simply that someone outside the software’s source vendor has a working exploit. Academically, this removes the chances that a vulnerability is only “theoretically exploitable” and makes possible exploitation a reality. Practically and in today’s world, exploits are generally created to be used and we can assume that a functional exploit will be used somewhere against someone.
Last, there must be external knowledge of the vulnerability for it to warrant the zero-day moniker. The key differentiator here is “external;” in order for the vulnerability to meet this requirement there must be knowledge of the vulnerability by at least one person outside the company or organization that produces the vulnerable software. If the vulnerability is only known by people at the software’s source and no one else, it is not a zero-day.
A friend of mine has rightly pointed out that this “external knowledge” may not be so cut-and-dried. The example he posed to me was that of vulnerabilities that have been privately reported to a software’s source vendor. In those cases, the reporter is outside the vendor, and using my definition above, this would constitute “external knowledge.” Are these privately reported vulnerabilities also zero-days if they meet the unfixed and working-exploit requirements? I believe they are, even if they are not causing security folk, the SANS Internet Storm Center, or CNN to sound the alarms.
I will fully admit that in writing this post I had to nail-down precisely what I considered to be a zero-day. My uncertainty primarily centered around one question: must there be active exploitation of the vulnerability to meet the requirements of being a zero-day? After thinking about it, I don’t think active exploitation is required for a zero-day to be a zero-day. Instead, I think it is the potential for active exploitation that makes a vulnerability a zero-day, and that such potential is predicated on external knowledge and a working exploit.
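The three-part test above can be sketched as a simple predicate. This is only an illustration of the definition as I’ve laid it out; the class and field names are my own invention, not any standard taxonomy:

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    """Illustrative model of the three zero-day characteristics."""
    official_public_fix_available: bool  # criterion 1: must be unfixed
    working_exploit_exists: bool         # criterion 2: a working exploit, in context
    known_outside_vendor: bool           # criterion 3: external knowledge


def is_zero_day(v: Vulnerability) -> bool:
    # All three characteristics must hold; failing any one disqualifies it.
    return (not v.official_public_fix_available
            and v.working_exploit_exists
            and v.known_outside_vendor)


# A privately reported, unfixed bug with a working exploit still qualifies,
# because the external reporter constitutes "external knowledge":
privately_reported = Vulnerability(
    official_public_fix_available=False,
    working_exploit_exists=True,
    known_outside_vendor=True,
)
print(is_zero_day(privately_reported))  # True
```

Note what the predicate deliberately omits: there is no field for active exploitation, for root privileges, or for remote code execution, because none of those is part of the test.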
I value looking at things negatively, perhaps because I have been a happily self-labeled “security person” for a while now, or perhaps because I’m a natural pessimist. To that end, what is expressly not required for a vulnerability to become a zero-day? These should seem redundant by now if I’ve accomplished my goals above:
- Exploiting the vulnerability need not yield root or administrative privileges
- Successful exploitation need not allow remote code execution
I’d like to know what you think! Does every vulnerability you would label a zero-day meet at least the requirements above?