
A FAIR Way to Assess Security Risk

Last year was in many ways a crux year for information security, and I can vividly remember myriad conversations with colleagues from Cisco and other companies at RSA 2011 about the then-recent spate of compromises and incidents. Although the intense media focus on high-profile compromises appears to have abated somewhat in early 2012, that doesn't mean the threat landscape has changed for the better -- if anything, it has become even more complex, a fact highlighted in Cisco's 2011 Annual Security Report and 2Q11 Global Threat Report, both available from www.cisco.com/security.

As the manager of Cisco’s Security Posture Assessment (SPA) team, I have seen an overall improvement in our customers’ security postures over the past year as organizations have been forced to adapt to this threat landscape, but that doesn’t mean that we can become complacent. As our customers’ postures improve, so do the attackers’ techniques, and the information security arms race continues…

All of these changes have me thinking about something that unfortunately hasn't changed: the way that we, as an industry, think about risk. As a group, our conception of risk continues to be static and deterministic, based on models such as Annualized Loss Expectancy (ALE). For those of you who aren't familiar with it, ALE is the product of Single Loss Expectancy (SLE) and the Annualized Rate of Occurrence (ARO). In other words, if the monetary loss from a single occurrence of an information security incident (SLE) is multiplied by the number of times we can expect that loss to occur in a year (ARO), we get the total amount of loss we can expect to incur annually (ALE). I'm simplifying this a bit, but that is the general gist of the ALE model. The idea is that you can use the model to determine how much money you should spend on mitigation -- if the ALE is $100,000 USD (for instance), then it doesn't make sense to spend more than that on mitigation.
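To make the arithmetic concrete, here is a minimal sketch in Python; the figures are hypothetical and not drawn from any real assessment:

```python
# Illustrative only: hypothetical figures, not from any assessment.
sle = 20_000     # Single Loss Expectancy: cost of one incident, in USD
aro = 5          # Annualized Rate of Occurrence: expected incidents per year
ale = sle * aro  # Annualized Loss Expectancy

print(f"ALE = ${ale:,} per year")  # ALE = $100,000 per year
# Under the classic rule of thumb, spending more than $100,000 per year
# on mitigation for this one scenario would not be cost-justified.
```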

If only the world were so simple. The way we usually use ALE is deterministic, which means we plug a single number into the equation and use it to represent reality. Often that number is an estimate by a subject matter expert (such as a security administrator), which is to say it's really an informed guess, or it's the average of the informed guesses of a group of experts. These point estimates are really opinions, but once they appear as numbers in an equation they can masquerade as objective facts. Stochastic models, by contrast, use probability distributions to make estimates. Instead of rolling a pair of dice and saying "my expectancy is to roll a seven, because that's the most common number rolled," a stochastic model considers the likelihood of everything from a 2 to a 12 and models accordingly. Were ALE modeled stochastically, the expectancies resulting from the model would be ranges instead of single numbers and would reflect the unlikely (but still possible) outcomes of a breach in addition to just the expected or likely ones. Of course, when your input is opinion, either model can be way off, so gathering and using historical data to build these models is critical.
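As a rough sketch of that contrast, the snippet below compares a deterministic ALE calculation with a simple Monte Carlo version. The distributions and parameters are invented for illustration only; a real model would be fit to historical loss data:

```python
# A minimal sketch of the deterministic vs. stochastic contrast described above.
# All distributions and parameters are made up for illustration.
import random

def deterministic_ale(sle, aro):
    # One point estimate in, one point estimate out.
    return sle * aro

def stochastic_ale(trials=10_000):
    losses = []
    for _ in range(trials):
        # Treat the inputs as distributions rather than single opinions:
        # incident count from a simple daily-probability (binomial) process,
        # per-incident loss from a lognormal with a heavy right tail,
        # so rare-but-severe breaches show up in the results.
        incidents = sum(1 for _ in range(365) if random.random() < 5 / 365)
        loss = sum(random.lognormvariate(9.5, 0.8) for _ in range(incidents))
        losses.append(loss)
    losses.sort()
    return {
        "median": losses[trials // 2],
        "p95": losses[int(trials * 0.95)],  # "unlikely but possible" outcomes
    }

print(deterministic_ale(20_000, 5))  # always 100000
print(stochastic_ale())              # a range: a typical year plus a tail value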

Don’t get me wrong: the ALE model is far better than just sticking your finger in the air and guessing at the risks to your assets. But what is really needed is a non-deterministic model for risk analysis that can demonstrate the variability in the factors that contribute to risk and provide us with tangible results that we can use to make better information security decisions. Luckily, just such a model exists in Factor Analysis of Information Risk (FAIR). FAIR comprises a much more detailed taxonomy of risk factors than exists in the ALE model and provides a mechanism for modeling these factors in a manner that recognizes that none of the factors is reducible to a single number. We’ve been using FAIR in scenario-based risk assessments to provide our customers with a more nuanced picture of the true risks posed to their information systems assets and operations, and the results have been enlightening. We’ve found that many of our customers -- and this is true across verticals -- have no formal method for assessing risk. Even the ALE model, for all of its drawbacks, isn’t widely deployed. The field is open for something new…
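To give a flavor of what modeling the factors as distributions looks like, here is a highly simplified sketch built around FAIR's top-level factors (loss event frequency derived from threat event frequency and vulnerability, multiplied by loss magnitude). This is not a full implementation of the FAIR taxonomy, and every range below is a placeholder that a real assessment would replace with calibrated estimates:

```python
# Simplified sketch of FAIR's top-level factors; ranges are placeholders.
import random

def simulate_scenario(trials=50_000):
    annual_losses = []
    for _ in range(trials):
        # Threat Event Frequency: how often a threat agent acts against the asset per year
        tef = random.triangular(1, 30, 6)
        # Vulnerability: probability that a threat event becomes a loss event
        vuln = random.triangular(0.05, 0.6, 0.2)
        # Loss Event Frequency is derived from the factors above, not guessed directly
        lef = tef * vuln
        # Loss Magnitude per event, in USD
        magnitude = random.triangular(5_000, 500_000, 40_000)
        annual_losses.append(lef * magnitude)
    annual_losses.sort()
    return {
        "p10": annual_losses[int(trials * 0.10)],
        "median": annual_losses[trials // 2],
        "p90": annual_losses[int(trials * 0.90)],
    }

print(simulate_scenario())  # a loss-exposure range, not a single number
```

The point of the output being a range rather than a single figure is exactly the nuance described above: decision makers see both the likely exposure and the tail.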

But risk assessment is just the first step. Assessments are typically point-in-time; they are based on data that may be accurate at the time of the assessment but that will change. The results of a point-in-time risk assessment are often revealing, but ultimately they won't help unless they are part of a formal risk management program. A risk management program provides a framework for using the results of recurring assessments proactively to help our customers make better information security decisions.

There is far too much behind FAIR to do it justice in a blog post, so I encourage you to check it out. I am at RSA 2012, so look for me on the show floor or at Cisco's customer event tonight, and I'll be happy to talk with you about FAIR, risk, penetration testing, recent incidents, or just about any other information security topic.



1 Comment.


  1. FAIR is indeed an interesting approach. I appreciate that it is well explained at the website.

    I’m very intrigued by Cisco’s Rapid Risk approach, which evidently takes still a different tack. From what I’m able to piece together (from here: http://www.cisco.com/web/about/security/intelligence/risk-triage-whitepaper.html ) ultimately this approach seems to aim at taking the “worry temperature” of the appropriate business-side Info Owners, rather than trying to dress up a guess about threat likelihood.

    I sense Rapid Risk may be the better approach. Other InfoSec risk approaches try to emulate the Enterprise Risk Management that groups like Safety might use. Trouble is, ERM uses historical data to estimate the likelihood of the same kinds of events recurring, and those estimates have proven relatively accurate.

    OTOH, InfoSec is guessing about the likelihood of Black Swan events, and those guesses are, by definition, anything but accurate because such events are unknown.

    Anyway, I’m straying. Would you have more info on Rapid Risk that you could share? I’m interested in the questions you use and the approach you take in the meetings, what works, what doesn’t work so well, etc.
