In this final part of the series, I will discuss the top customer priority: visibility. Cisco offers customers the ability to gain insight into what's happening in their network while maintaining compliance and business operations.
But before we dive into that, let's recap part two of our series on Cisco's Secure Data Center Strategy, which covered threat defense. In summary, Cisco understands that preventing threats, both internal and external, is not a matter of simply permitting or denying data; rather, the data needs deeper inspection. Cisco offers two leading platforms that work with the ASA 5585-X Series Adaptive Security Appliance to protect the data center: the new IPS 4500 Series Sensor platform for high-data-rate environments, and ASA CX Context-Aware Security for application control. To learn more, see part 2 here.
As customers move from physical to virtual to cloud data centers, a challenge we hear over and over is that they want to maintain their compliance, security, and policies across these varying instantiations of their data center. In other words, they want the same controls present in the physical world to carry over to the virtual: one policy, one set of security capabilities. This maintains compliance and overall security, and eases business operations.
Better visibility into users, their devices, applications, and access controls not only helps maintain compliance but also addresses the threat defense requirements of the overall data center. Cisco's visibility tools give our customers the insight they need to decide who gets access to what kinds of information, where segmentation is needed, and where the boundaries in their data center lie, whether physical or virtual, along with the ability to perform the right level of policy orchestration to maintain compliance and the overall security posture. These tools are grouped into three key areas: management and reporting, insights, and policy orchestration.
At Interop this week, Cisco unveiled its new NetFlow Generation Appliance (NGA) 3140, which sets a new standard for high-performance, cost-effective flow visibility. It empowers network operations, engineering, and security teams with actionable insight into network traffic for resource optimization, application performance improvement, traffic accounting, and security needs.
Cisco NGA customer Human Kinetics conducts online certification courses and tests for health and fitness professionals, and offers print and multimedia content such as videos, ebooks, apps on tablets, and other downloadable material. "We needed comprehensive information about our network to keep our content protected, secure our site against disruption, and deliver excellent, reliable performance," says Brad Trankina, director of network and information systems at Human Kinetics. "Comparing Cisco NGA to what we had just a few months ago is like comparing our network today to the 3Com hubs we had ten years ago. It's like a night and day difference."
Organizations implementing Continuous Monitoring strategies are remiss if they do not take into account the value of network telemetry in their approach. NIST Special Publication 800-137, Information Security Continuous Monitoring for Federal Information Systems and Organizations, provides guidance on implementing a Continuous Monitoring strategy, but fails to address the importance of network telemetry in that strategy. In fact, the 38-page document mentions the word "network" only 36 times. SP 800-137 instead focuses on two primary areas: configuration management and patch management. Both are fundamental aspects of managing an organization's overall risk, but relying on those two aspects alone falls short of an effective Continuous Monitoring strategy, for the following reasons.
First, the concepts around configuration and patch management are very component-specific: individual components of a system are configured and patched. While these are important, the focus is on vulnerabilities arising from improper configuration or known weaknesses in software. Second, this approach presumes that with proper configuration control and timely patch management, the overall risk of exploitation of the organization's information system is dramatically reduced.
While an environment with proper configuration and patch management is less likely to be exposed to known threats, it is no better prepared to prevent or detect sophisticated threats based on unknown or zero-day exploits. Unfortunately, the customization and sophistication of malware is only growing; a recent threat report indicated that nearly two-thirds of Verizon's data breach caseload was due to customized malware. It is also important to keep in mind that time passes between the moment a configuration error is discovered and the moment it is fixed, or the time it takes to patch vulnerable software, and that window can afford an attacker a successful vector. For these reasons, organizations looking to implement a Continuous Monitoring strategy should depend on the network to provide a near real-time view of the transactions occurring. Understanding the behavior of the network is key to creating a more dynamic, risk-management-focused Continuous Monitoring strategy.
Network telemetry can consist of different types of information describing network transactions at various locations on the network. Two valuable telemetry sources are NetFlow and Network Secure Event Logging (NSEL). NetFlow is a mechanism organizations can use to build a more holistic view of the enterprise risk picture. It is available on the majority of network platforms and builds transaction records of machine-to-machine communications, both within the enterprise boundary and for connections leaving it. These communication records provide invaluable information and can identify both policy violations and configuration errors. NetFlow also provides insight into malicious software communications and large quantities of information leaving an enterprise. Network Secure Event Logging uses the NetFlow protocol to transmit important information about activity occurring on enterprise firewalls; this data can be aggregated with other NetFlow sources to bring additional context to the network behavior being observed.
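To make the idea of a flow "transaction record" concrete, here is a minimal, illustrative Python sketch that decodes a NetFlow v5 export datagram (the widely documented format: a 24-byte header followed by 48-byte flow records). The `parse_v5` helper and the dictionary field names are my own for illustration, not part of any Cisco tool; a production collector would handle sampling, v9/IPFIX templates, and malformed packets.

```python
import socket
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
HEADER_FMT = "!HHIIIIBBH"
# NetFlow v5 flow record (48 bytes): srcaddr, dstaddr, nexthop, input, output,
# dPkts, dOctets, first, last, srcport, dstport, pad, tcp_flags, prot, tos,
# src_as, dst_as, src_mask, dst_mask, pad.
RECORD_FMT = "!IIIHHIIIIHHxBBBHHBB2x"


def parse_v5(datagram):
    """Parse one NetFlow v5 datagram into (version, list of flow dicts)."""
    version, count = struct.unpack(HEADER_FMT, datagram[:24])[:2]
    flows, offset = [], 24
    for _ in range(count):
        rec = struct.unpack(RECORD_FMT, datagram[offset:offset + 48])
        flows.append({
            "src": socket.inet_ntoa(struct.pack("!I", rec[0])),
            "dst": socket.inet_ntoa(struct.pack("!I", rec[1])),
            "packets": rec[5],
            "bytes": rec[6],
            "srcport": rec[9],
            "dstport": rec[10],
            "proto": rec[12],   # e.g. 6 = TCP, 17 = UDP
        })
        offset += 48
    return version, flows
```

Each record is exactly the kind of machine-to-machine conversation summary described above: who talked to whom, on which ports, and how much data moved, which is what makes large outbound transfers or policy violations stand out.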
Coupling the configuration and patch management guidance in SP 800-137 with an active NetFlow monitoring capability gives organizations a Continuous Monitoring strategy that is more system-focused and more apt to foster a dynamic risk management environment. Cisco will be discussing NetFlow, NSEL, and other security topics at the March 21st Government Solutions Forum in Washington, D.C. If you're interested in learning more, click on the following URL:
I love my job, but I really don't enjoy my commute… and the unpredictable traffic. Living on the west side of San Francisco and working on the east side of San Jose, Google Maps tells me my journey is a hefty 47.2 miles and 1 hour and 1 minute (without traffic). Holidays, rain, and accidents can add minutes and sometimes hours.
Twice a day, to and from work, I start asking the questions:
How busy is it on the road right now? Is the road full of tired commuters, semis, or concert traffic?
Which lane should I be in? If I’m in the fast lane, what are the odds of it coming to a screeching halt while I watch the other three lanes go by?
Do I need to detour to another interstate or highway due to an accident or concert?