
The Golden Key to Any Cloud

As cloud technology and organizations mature, customers are shifting their focus from provisioning individual servers to richer cloud-based application platform stacks. Why? Servers usually do not exist as standalone entities; they are designed to run something tangible for the business. Multi-tier application platform stacks, for example, are designed around multiple server roles such as database, application, and web servers.

In this era of the cloud, creating golden templates for each element required to configure these multi-tier stacks, and for the servers they reside on, is not only unwieldy for IT to maintain and manage; the templates are also monolithic. If a single element changes, the whole golden image must be revised. Golden images are not configurable and frequently require additional manual configuration to complete installation.

What’s the solution? It begins with the concept of DevOps.

DevOps is a software development method that fosters closer collaboration between software development and IT operations, so that these multi-tier application stacks can be consumed in the cloud without human intervention. A number of disciplines fall under the DevOps category, but this blog will focus on configuration management.

Puppet and Chef are two of the leading configuration management vendors in the DevOps segment delivering the following benefits:

• Elastic, continuous configuration
• Increased productivity when managing hundreds to thousands of nodes
• Improved IT responsiveness through faster deployment of changes
• Elimination of configuration drift, reducing outages
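The drift-elimination benefit comes from the declarative, idempotent model these tools share: desired state is declared once, and an agent repeatedly compares it against actual state and corrects only what differs. A minimal Python sketch of that convergence idea (the resource names and states here are hypothetical, not actual Puppet or Chef code):

```python
# Sketch of declarative configuration convergence: desired state is
# declared once, and converge() computes only the corrections needed,
# which is what eliminates drift over repeated runs.

def converge(desired, actual):
    """Return the corrections needed to bring actual state to desired state."""
    corrections = {}
    for resource, state in desired.items():
        if actual.get(resource) != state:
            corrections[resource] = state
    return corrections

# Hypothetical desired state for a web-tier node.
desired = {
    "package:nginx": "installed",
    "service:nginx": "running",
    "file:/etc/nginx/nginx.conf": "checksum-abc123",
}

# Actual state has drifted: the service was stopped manually.
actual = {
    "package:nginx": "installed",
    "service:nginx": "stopped",
    "file:/etc/nginx/nginx.conf": "checksum-abc123",
}

print(converge(desired, actual))  # → {'service:nginx': 'running'}
```

Because the run is idempotent, applying the same desired state to a node already in that state produces no changes at all.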

There is a lot of buzz about this capability. How much buzz? Watch this video from Cisco Live Orlando.

Within the next month, Cisco will be releasing a cloud accelerator that delivers configuration management of multi-tier application stacks. Using a TOSCA-modeled graphical user interface, customers work on a canvas that simplifies the design of these stacks into templates. Each element (server, network device, and storage) is represented on the canvas by a graphical icon, and behind each icon are the configuration details for that component. For example, a network device configuration may include firewall rules and load-balancing algorithms; for servers, Cisco leverages Puppet, Chef, or home-grown scripts. The result is a blueprint that allows end users to consume the complete application stack, on demand, delivered by the cloud.
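Conceptually, such a blueprint is a set of typed elements, each carrying its own configuration payload, plus dependencies that fix deployment order. A rough Python sketch of that structure (the element names, fields, and tools shown are illustrative, not the actual TOSCA model or accelerator format):

```python
# Illustrative blueprint: each element has a type, a configuration payload
# (the details "behind the icon"), and dependencies that determine order.
blueprint = {
    "db-server":  {"type": "server",  "config": {"tool": "puppet", "role": "database"},
                   "depends_on": []},
    "app-server": {"type": "server",  "config": {"tool": "chef", "role": "application"},
                   "depends_on": ["db-server"]},
    "lb":         {"type": "network", "config": {"algorithm": "round-robin",
                                                 "firewall": ["allow 443"]},
                   "depends_on": ["app-server"]},
}

def deploy_order(blueprint):
    """Topologically sort elements so each deploys after its dependencies."""
    ordered, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in blueprint[name]["depends_on"]:
            visit(dep)
        ordered.append(name)
    for name in blueprint:
        visit(name)
    return ordered

print(deploy_order(blueprint))  # → ['db-server', 'app-server', 'lb']
```

An orchestrator walking this order is what turns a static template into on-demand delivery of the whole stack.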

So now we have blueprints. Where’s the real advantage?

Cisco Intelligent Automation for Cloud (IAC) is the golden key that gives you the advantage, because it unlocks this new approach to cloud efficiency. Blueprints for multi-tier application stacks do little on their own if they cannot be ordered by customers from a standardized menu of services and acted upon by an orchestrator that automatically deploys the entire configuration. Extending functionality for DevOps is just another example of Cisco IAC's ability to go beyond IaaS without requiring a rip-and-replace of the solution or major push-ups by customers.

Why just provision servers and continue to increase IT costs with manual “last mile” provisioning?
Cisco IAC and the configuration management accelerator simplify the delivery of multi-tier application stacks through self-service ordering and repeatable delivery. Cloud accelerators are designed to follow the vision and strategy of Cisco IAC, eliminating code islands that become problematic when you upgrade to the next-generation Cisco IAC edition.

To browse the current cloud accelerators, go here. First-time visitors will need to register.

If you would like to learn more or comment, tweet us at: http://twitter.com/ciscoum


Evolving Continuous Monitoring to a Dynamic Risk Management Strategy

Organizations implementing Continuous Monitoring strategies are remiss if they are not taking into account the value of network telemetry in their approach. NIST Special Publication 800-137, Information Security Continuous Monitoring for Federal Information Systems and Organizations, provides guidance on implementing a Continuous Monitoring strategy, but fails to address the importance of network telemetry to that strategy. In fact, the 38-page document mentions the word "network" only 36 times. SP 800-137 instead focuses on two primary areas: configuration management and patch management. Both are fundamental aspects of managing an organization's overall risk, but relying on those two aspects alone falls short of an effective Continuous Monitoring strategy, for the following reasons.

First, the concepts behind configuration and patch management are very component-specific: individual components of a system are configured and patched. While these are important, the focus is on vulnerabilities from improper configuration or known weaknesses in software. Second, this approach presumes that with proper configuration control and timely patch management, the overall risk of exploitation of the organization's information system is dramatically reduced.

While an environment with proper configuration and patch management is less likely to be exposed to known threats, it is no better prepared to prevent or detect sophisticated threats based on unknown or zero-day exploits. Unfortunately, malware is only growing in customization and sophistication; a recent threat report indicated that nearly two-thirds of Verizon's data-breach caseload involved customized malware. It is also important to keep in mind that time passes between the discovery of a configuration error and its remediation, or between the disclosure of a software vulnerability and its patching, and that window can afford an attacker a successful vector. For these reasons, organizations looking to implement a Continuous Monitoring strategy should depend on the network to provide a near real-time view of the transactions occurring. Understanding the behavior of the network is essential to creating a more dynamic, risk-management-focused Continuous Monitoring strategy.

Network telemetry can consist of different types of information describing network transactions at various locations on the network. Two valuable telemetry sources are NetFlow and Network Secure Event Logging (NSEL). NetFlow is a mechanism organizations can use to gain a more holistic view of the enterprise risk picture. It is available on the majority of network platforms and builds transaction records of machine-to-machine communications, both within the enterprise boundary and for connections leaving it. These communication records provide invaluable information, identifying both policy violations and configuration errors. NetFlow also provides insight into malicious software communications and large quantities of information leaving an enterprise. Network Secure Event Logging uses the NetFlow protocol to transmit important information about activity on enterprise firewalls; this data can be aggregated with other NetFlow sources to add context to the observed network behavior.
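To illustrate the kind of analysis these flow records enable, here is a small Python sketch that aggregates flows by internal source and flags hosts sending unusually large volumes to external destinations, the "large quantities of information leaving an enterprise" case above. The record fields, address range, and threshold are hypothetical, not a specific collector's export format:

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")       # hypothetical enterprise address range
EGRESS_THRESHOLD = 500 * 1024 * 1024      # 500 MB; tune per environment

def flag_large_egress(flows):
    """Sum bytes per internal source talking to external hosts; flag heavy senders."""
    egress = defaultdict(int)
    for flow in flows:
        src, dst = ip_address(flow["src"]), ip_address(flow["dst"])
        if src in INTERNAL and dst not in INTERNAL:
            egress[flow["src"]] += flow["bytes"]
    return {src for src, total in egress.items() if total > EGRESS_THRESHOLD}

# Hypothetical flow records (source, destination, byte count).
flows = [
    {"src": "10.0.1.5", "dst": "203.0.113.9",  "bytes": 600 * 1024 * 1024},  # large external transfer
    {"src": "10.0.1.7", "dst": "10.0.2.20",    "bytes": 900 * 1024 * 1024},  # internal, ignored
    {"src": "10.0.1.8", "dst": "198.51.100.4", "bytes": 2 * 1024 * 1024},    # small, ignored
]

print(flag_large_egress(flows))  # → {'10.0.1.5'}
```

A real deployment would feed this kind of aggregation from a NetFlow collector rather than an in-memory list, but the principle of baselining egress volume per host is the same.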

Coupling the configuration and patch management guidance in SP 800-137 with an active NetFlow monitoring capability will give organizations a Continuous Monitoring strategy that is more system-focused and more apt to foster a dynamic risk management environment. Cisco will be discussing NetFlow, NSEL, and other security topics at the March 21st Government Solutions Forum in Washington, D.C. If you're interested in learning more, click on the following URL:

www.cisco.com/go/gsf
