What is a Zero-day Vulnerability?

November 7, 2012 - 6 Comments

We often hear about a dramatic class of vulnerabilities referred to as “zero-days” (also written “0 days,” “0-days,” or “0days,” and pronounced “zero days” or “oh days”). I have seen a number of email threads and blog posts lately that refer to vulnerabilities in this class in vastly different ways. This caused me to ask myself: what exactly is a zero-day vulnerability?

Emotion around zero-days can be high. This is predominantly because vulnerabilities with this label are perceived to be of greater impact and urgency. That is often correct and fair. However, there is at least one other reason for heightened energy around these issues: many teams and organizations have special service level agreements or informal expectations levied upon them in “outbreak” or “zero-day” scenarios. Imprecise use of the zero-day label can mix with these expectations to needlessly increase the urgency—and corresponding organizational disruption—of a vulnerability in these situations.

So what are the critical characteristics that set a zero-day apart from another seemingly important and urgent vulnerability? In my opinion, three characteristics have garnered these vulnerabilities the urgency they hold, and if any one of them is not present, the vulnerability is not a zero-day:

  • The vulnerability is unfixed: there is no official, publicly available fix
  • A working exploit exists outside the software’s source vendor
  • Knowledge of the vulnerability exists outside the software’s source vendor

I’d like to dive into each of those items individually. First, the vulnerability must be unfixed, or, more precisely, there must not be an official and publicly available fix for it. History has shown us that some zero-days were indeed known to the source vendor beforehand, even if their testing, release, or disclosure processes prevented them from making fixed software available before the vulnerability reached zero-day infamy. Regardless, if official, public fixed software is available, it’s not a zero-day.

As a matter of practicality, once a vulnerability has been correctly labeled a zero-day it generally carries that label until the vulnerability is no longer relevant or until its relevance has been greatly diminished. The public release of fixed software does not magically change the myriad of blog posts, documents, and alerts that refer to the “critical vulnerability in foo” as a zero-day. It would be handy if it did, however.

Second, there must be working exploit code for a vulnerability to qualify as a zero-day. Now the tricky part here is “working exploit,” where “working” is entirely dependent on the vulnerability and the context in which it is exploited. The most common counter-example of a working exploit is proof-of-concept code that produces a crash without executing code on the vulnerable device. However, whether or not a “working exploit” is indeed “working” must be compared to the situation and how an attacker might use the exploit. An exploit that executes code in a browser is certainly a “working exploit,” as are exploits that crash remote routers or, perhaps, crash the client application of a physical security system in such a way as to allow a physical theft.

It is also important to note that there is no requirement that the exploit code be widely available, or available to you; simply that someone outside the software’s source vendor has a working exploit. Academically, this removes the chance that a vulnerability is only “theoretically exploitable” and makes exploitation a real possibility. Practically speaking, in today’s world exploits are generally created to be used, and we can assume that a functional exploit will be used somewhere against someone.

Last, there must be external knowledge of the vulnerability for it to warrant the zero-day moniker. The key differentiator here is “external;” in order for the vulnerability to meet this requirement there must be knowledge of the vulnerability by at least one person outside the company or organization that produces the vulnerable software. If the vulnerability is only known by people at the software’s source and no one else, it is not a zero-day.

A friend of mine has rightly pointed out that this “external knowledge” may not be so cut-and-dried. The example he posed to me was that of vulnerabilities that have been privately reported to a software’s source vendor. In those cases the reporter is outside the vendor, and under my definition above, this constitutes “external knowledge.” Are these privately reported vulnerabilities also zero-days if they meet the unfixed and working-exploit requirements? I believe they are, even if they are not causing security folks, the SANS Internet Storm Center, or CNN to sound the alarms.

I will fully admit that in writing this post I had to nail down precisely what I considered to be a zero-day. My uncertainty primarily centered around one question: must there be active exploitation of the vulnerability for it to meet the requirements of being a zero-day? After thinking about it, I don’t believe active exploitation is required. Instead, I think it is the potential for active exploitation that makes a vulnerability a zero-day, and that such potential is predicated on external knowledge and a working exploit.
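The three tests described above amount to a simple conjunction, which can be sketched as a predicate. This is purely illustrative; the type and field names (`VulnerabilityStatus`, `is_zero_day`, and so on) are my own invention, not part of any real tool or standard.

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityStatus:
    """Illustrative record of the three traits discussed above."""
    official_public_fix_available: bool  # vendor has shipped an official, public fix
    working_exploit_exists: bool         # someone outside the vendor has a working exploit
    known_outside_vendor: bool           # at least one person outside the vendor knows

def is_zero_day(v: VulnerabilityStatus) -> bool:
    """A vulnerability is a zero-day only if all three tests pass."""
    return (not v.official_public_fix_available
            and v.working_exploit_exists
            and v.known_outside_vendor)

# A privately reported, unfixed bug with a working exploit still qualifies,
# because the external reporter satisfies the "external knowledge" test:
privately_reported = VulnerabilityStatus(
    official_public_fix_available=False,
    working_exploit_exists=True,
    known_outside_vendor=True,
)
print(is_zero_day(privately_reported))  # True
```

Note that neither active exploitation nor wide availability of the exploit appears as a field: per the discussion above, the potential for exploitation is what matters, not observed attacks.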

I value looking at things negatively, perhaps because I have been a happily self-labeled “security person” for a while now, or perhaps because I’m a natural pessimist. To that end, what is expressly not required for a vulnerability to become a zero-day? These should seem redundant by now if I’ve accomplished my goals above:

  • Exploiting the vulnerability need not yield root or administrative privileges
  • Successful exploitation need not allow remote code execution

I’d like to know what you think! Does every vulnerability you would label a zero-day meet at least the requirements above?




  1. If I understand it, there just has to be a working exploit. In the latest Microsoft IE exploit that happened in September, I do not believe the discoverer was malicious. From what I understand, the vulnerability was found by a researcher who went to Metasploit to develop a “working exploit.” Metasploit developed the exploit and reported the vulnerability as a 0day exploit/vulnerability.

  2. Hi, Richard, thanks for the note.

    I think that if these backdoors are themselves vulnerabilities, because they would allow an attacker to do something beyond what should be allowed, then yes, they are zero-day vulnerabilities if they meet the tests above.

    Would you agree?

    thanks again

    • Tim,
      I like your quick response; quite fascinating, too. But I’d disagree a bit. lol

      Let’s look at the “intentions” of both “holes.” Although I’m not an expert on zero-day attacks, I think a zero-day attack exploits the fact that developers do not know about the vulnerability. So yes, an attacker can exploit it, probably for a long time, before developers even figure out how to block it.

      Now, a “backdoor” was “designed” by developers to allow them to bypass some authentication process when there’s a need for it. Developers must know how to quickly block/fix the backdoor if there’s a need to. Have you seen the movie Knight Rider 2008?

      So, my point is that backdoors are deliberate, with a possible immediate remedy, but a zero-day vulnerability is not deliberate, is possibly unknown to the developer, and the fix is actually unknown.

      Backdoors become vulnerabilities when known externally. Zero-day holes are 100% vulnerabilities because they are known externally but not even internally. Still, there’s a thin line between them.

      Well, again Tim, I’m not an expert; my researcher mind is just active. lol

      • Thanks, Richard.

        I believe the developer’s intentions and knowledge of the issue are not relevant; rather, the security exposure presented to users is the driving factor in determining what is a vulnerability.

        The three traits of a zero-day seem to hold true here since the majority of backdoors could be rightly labeled vulnerabilities.

  3. Tim, I understand that zero-day attacks occur when malicious users exploit unfixed security holes (unfixed vulnerability) by developing tools (working exploit) that take advantage of the vulnerabilities, and then share the code or hack with other hackers (external knowledge).

    Now, when developers build new tools, I understand they sometimes build a “backdoor” into the algorithms. Would you say that when this backdoor becomes known and exploited publicly, it becomes a zero-day attack?

    Note that the developers deliberately built the backdoor into the software.