About a month ago, there was a coordinated disclosure of a flaw in TCP that affected a number of vendors, including Cisco. As is often the case when a vulnerability is disclosed in a widely deployed technology such as TCP, it’s in the best interests of customers and the industry alike that everyone agrees on a common solution to the issue, as well as a date and time of disclosure. In this most recent event, the issue was first reported over a year ago — so what took vendors so long to formally address the flaw?
The answer is not complex, but it does merit explanation. Several factors can affect the timing of a vulnerability disclosure. For example, as the number of affected products and vendors increases, so does the ‘number of cats that need to be herded.’ Frankly, this is often frustrating for the researcher(s) who discovered the flaw in the first place, as the time between the initial report of the vulnerability and coordinated disclosure by the vendors feels unnecessarily long. So what could possibly take a year? In no particular order of importance, consider the following variables:
- How many vendors are affected? Usually, issues which affect more than a single vendor are coordinated through a (neutral) third party. In this particular instance, CERT-FI and JP-CERT were the coordinators. They are responsible for helping to identify and contact the affected vendors, including mediation between them to arrive at an agreed-upon date for public disclosure. Personally, I believe these coordinators perform an invaluable (and often thankless) service for which we are all grateful.
- How many products are affected? Vendors with multiple products and/or multiple versions of software will require more time to test, patch, and verify the fix than would a vendor with only a handful of vulnerable products or releases. It’s worth mentioning that a large number of things to fix does not always equate to the longest delay; the overall responsiveness of the vendor, regardless of how many products or releases are affected, ultimately shapes how much time is required. As noted above, we’re all asked to hold our disclosures until everyone is ready (if possible). That could mean that the code fix has been included in a previous release even though disclosure of the issue has not yet occurred!
- What timeline has been requested by the researcher? Most researchers will work with the coordinator to release their full findings at the same time as vendors disclose the flaw, as this minimizes the potential for exploitation. In the case of the issue discovered and reported by Outpost24, there was a controlled release of information: they announced that they had found something significant but did not give away much detail; they presented more information at a conference but still omitted key details that would allow someone to create malicious code; and so on. This is a sound approach to motivating vendors to address the issue promptly without endangering their customers.
- How responsive is the coordinator? As I stated earlier, coordination of industry-wide security events is much like herding cats. Even so, the ‘herder’ has to remain engaged and perform many of the same project management functions required to bring any product to market: setting (and revising) the timeline based on the timeliness of deliverables, coordinating multiple teams, etc.
These are, I believe, the key factors that ultimately influence the disclosure timeline for industry-wide vulnerabilities. If you think I’ve missed something or have a different view on the subject, I’d like to hear from you.