Putting the ‘C’ in Gartner’s CARTA
As we get ready for the Gartner IT Symposium/Xpo in Orlando, we’ve been thinking more about every element and imperative in their CARTA model: Continuous Adaptive Risk and Trust Assessment. Since ‘C’ also stands for Cisco, let’s start there.
Gartner uses the word “continuous” in a lot of places, including in their seven imperatives. It’s a reaction to the former practice of using what they call “one-time security gates”: you made a decision based on a static set of information (such as a source IP address or a username and password combination), and then you never revisited it. We know that this practice isn’t sufficient to maintain the proper level of trust. Trust is neither binary nor permanent: you don’t trust something or someone to do everything, and you don’t trust forever. Based on the changing nature of risk and the environment, you have to check more than once.
How often do you need to check, and does “continuous” really mean “all the time”? It depends on what you’re checking, what actions you’re taking based on those checks, and how both the checks and the actions affect the system itself (users, applications, devices, networks and so on). Let’s take a look at a chart that Sounil Yu, formerly at Bank of America, devised for the purposes of identifying all the different ways that authentication can happen:
As you can see, a device can authenticate to a network using network access control; to an application using a client-side certificate; and to data with an encryption key. There are many opportunities to authenticate, but should you use all of them? If you try to make a user do all of the steps in the bottom row — authenticate to the device, the application, the network, the data — then you’re going to have a very cranky user. Continuous authentication, if you want to use it, has to be hidden from the user except at times when your estimation of risk really needs the user’s active participation.
On the other hand, devices and systems don’t mind continuous authentication, so doing the continuous checking and verification isn’t as disruptive. And continuous monitoring is fine, as long as you know what you’re going to do with all the data that’s generated. Can you interpret and respond to an ongoing stream of event data? Can you automate that response? If so, great; if not, you’ll end up throttling that “continuous” monitoring to produce the key data that you can actually use.
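To make that throttling concrete, here is a minimal sketch of condensing a continuous event stream into the few signals you can actually act on. The event shape, field names, and threshold are illustrative assumptions, not a real product API:

```python
from collections import Counter

def summarize(events, threshold=3):
    """Count authentication failures per user and surface only the users
    who cross the threshold -- the 'key data' worth responding to.
    (Hypothetical event schema: {"user": ..., "type": ...}.)"""
    failures = Counter(e["user"] for e in events if e.get("type") == "auth_failure")
    return {user: n for user, n in failures.items() if n >= threshold}
```

The point of the sketch is the shape of the tradeoff: raw events stream in continuously, but the response pipeline only ever sees the aggregated, thresholded summary it can automate against.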
Gartner’s CARTA Imperative Number Two says “Continuously Discover, Monitor, Assess and Prioritize Risk — Proactively and Reactively.” How often do you do all of these things? Near real-time discovery of users and assets is the ideal state, and there are various ways to accomplish it. Continuous monitoring is (hopefully) a given. The tricky parts are assessment and prioritization, which often need a human to incorporate business context. For example, getting a login request from an unusual location could be a high risk, unless you already know that the employee using that account is really traveling there.
An organization needs to design its monitoring, analysis and actions around risk, but with tradeoffs against what the humans in the equation can reasonably support. How long can you let a successfully authenticated application session last before you start worrying that the user is no longer who you thought they were? Two hours? Eight hours (a typical working day)? A week? Can you force the user to re-authenticate just once through a single sign-on system, or will they have to log back into several applications? The answers can determine how frequently you carry out that “continuous” verification.
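The session-lifetime question above can be reduced to a very small policy check. This is a hedged sketch, assuming a simple maximum-age policy (the eight-hour figure and function names are illustrative, not a real framework):

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy value: a typical working day, per the discussion above.
MAX_SESSION_AGE = timedelta(hours=8)

def needs_reauth(last_authenticated, now=None):
    """Return True once the authenticated session is older than policy allows,
    signaling that the user should re-verify (ideally once, via single sign-on)."""
    now = now or datetime.now(timezone.utc)
    return now - last_authenticated > MAX_SESSION_AGE
```

In practice the cutoff would be one input among several; the design choice is that the check is cheap enough to run on every request, so "continuous" verification becomes a per-request comparison rather than a disruptive prompt.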
What events will cause you to revise your risk estimation and require fresh verification? It might be a request for a sensitive or unusual transaction, in which case you might resort to step-up authentication and kick off an extra permission workflow. It could be the release of a new security patch, so that you want to force all users to update before they can renew their access. Or it could be contact with an asset that is now known to be compromised, and you have to reset everything you knew and trusted about the application and its processes.
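Those trust-changing events amount to a mapping from triggers to verification actions. A minimal sketch, assuming hypothetical event and action names (none of these correspond to a real product API):

```python
# Illustrative mapping of trust-changing events to responses, per the scenarios above.
POLICY = {
    "sensitive_transaction": "step_up_auth",      # extra factor plus a permission workflow
    "security_patch_released": "require_update",  # block renewal of access until patched
    "asset_compromised": "revoke_trust",          # reset everything previously trusted
}

def respond_to(event):
    """Map a trust-changing event to a verification action;
    anything unrecognized gets only the routine periodic check."""
    return POLICY.get(event, "routine_check")
```

The default branch is the interesting part: most events should not trigger fresh verification, which is what keeps "adaptive" from degenerating into "constantly interrupting the user."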
Your risk and trust assessments should be adaptive, but they shouldn’t be gratuitously continuous. Perform them as often as your risk models require, and only as frequently as you can handle. Balancing controls against usability is the great challenge before us today.
Learn more about Cisco’s Zero Trust approach during Wendy’s talk on October 21 at 1:00 p.m. ET at Gartner IT Symposium/Xpo in Orlando, FL, which takes place at Walt Disney World Swan and Dolphin Resort.