Cisco Blogs

The Benefits of an Application Policy Language in Cisco ACI: Part 4 – Application Policies for DevOps

October 21, 2014 at 5:00 am PST

[Note: This is the last installment of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 1 | Part 2 | Part 3]

As noted earlier in this series, modern DevOps tools such as Puppet, Chef, and CFEngine have already moved toward a declarative model of IT automation, so there is obvious synergy between DevOps and the Cisco ACI policy model. These automation tools optimize application delivery processes and automate critical IT tasks, making organizations more agile and efficient.
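To make the declarative idea concrete, here is a minimal conceptual sketch (in Python, with invented state names): the operator declares the desired end state, and an agent computes whatever actions are needed to reconcile the system toward it. This illustrates the style shared by tools like Puppet and Chef, not their actual implementations.

```python
# Conceptual sketch of declarative automation (illustrative only; not the
# actual Puppet/Chef implementation). The operator declares desired state;
# an agent compares it with observed state and derives the needed actions.

desired_state = {
    "packages": {"nginx": "installed"},
    "services": {"nginx": "running"},
}

def observe_current_state():
    # A real agent would query the system; here we stub in a drifted state.
    return {
        "packages": {"nginx": "absent"},
        "services": {"nginx": "stopped"},
    }

def reconcile(desired, current):
    """Return the (category, name, have, want) actions needed to converge."""
    actions = []
    for category, items in desired.items():
        for name, want in items.items():
            have = current.get(category, {}).get(name)
            if have != want:
                actions.append((category, name, have, want))
    return actions

for category, name, have, want in reconcile(desired_state, observe_current_state()):
    print(f"{category}/{name}: {have} -> {want}")
```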

In an early 2014 blog post, Andi Mann, vice president of strategic solutions at CA Technologies, wrote about the evolution to DevOps and the synergy with the Cisco ACI policy model:

Though the DevOps approach of today—with its notable improvements to culture, process, and tools—certainly delivers many efficiencies, automation and orchestration of hardware infrastructure has still been limited by traditional data center devices, such as servers, network switches and storage devices. Adding a virtualization layer to server, network, and storage, IT was able to divide some of these infrastructure devices, and enable a bit more fluidity in compute resourcing, but this still comes with manual steps or custom scripting to prepare the end-to-end application infrastructure and its networking needs used in a DevOps approach.

The drag created by these traditional application infrastructures has been somewhat reduced by giving that problem to cloud providers, but in reality this drag never really went away until Cisco innovated application-centric programmability with Cisco ACI. This innovative new solution is now poised to greatly benefit the whole application economy, especially management of the DevOps application environment…



SP Network Transformation at the Cisco Live Cancun World of Solutions

October 21, 2014 at 1:30 am PST


By Igor Dayen, Manager, SP Product and Solutions Marketing

The excitement starts on November 3rd in Cancun, Mexico, where Cisco is holding its next Cisco Live event. It is a great opportunity for the service provider community to learn from industry experts, get inspired, and understand how Cisco’s Open Network strategy can fast-track their growth. In the World of Solutions, the SP booth is hosting numerous demos and live equipment that tell the story of how Cisco is helping carriers address their business requirements.

Cisco Live is known for its extensive lineup of technical breakout sessions, and Cancun will be no different. Hot topics such as NFV and SDN will be covered in depth.

 


So what can you expect when you “get in the driver’s seat” and take the tour of the SP booth? It is the world of the open network. You will appreciate the power of the Evolved Programmable Network, built on the latest innovations such as Autonomic Networking and nV technology. Both of these technologies enable true zero-touch provisioning in the access network infrastructure. When put to work, they significantly simplify network operations and reduce the number of truck rolls.

In optical networking, we are showcasing advances in ROADM technology and optical transport. With the NCS 2000 ROADM we will demonstrate innovations that address the flexible and dynamic needs of networks, and preview the new capabilities on the horizon. Our NCS 4000 story will depict the convergence of packet, Optical Transport Network (OTN), and wavelength-division multiplexing (WDM) transport. This convergence is the foundation for business flexibility, enabling network operators to support traditional subscribers, wholesalers, and data center interconnect on a single platform.

All of these resources are orchestrated by the Evolved Services Platform (ESP). We will walk you through how the ESP engine orchestrates and automates EPN resources, and we invite you to test drive the ESP framework and get a feel for its benefits and capabilities. Not only will you learn how to operate within this framework, but you will also see how the ESP allows end users and enterprises to focus on their core business and, in turn, lower their OpEx.

The best way to explain the ESP is to illustrate its power with important use cases, which is why we are bringing four very different examples to Cancun. First, the Quantum virtualized Broadband Node (Q-vBN) offers individual device management with cloud-based QoS that is accessible from the home network environment; Q-vBN is our residential vCPE use case. The second use case is Cloud DVR: the ESP enables you to scale and accelerate the delivery of any type of content, over any network, to any device, removing the need for a physical DVR in customer homes. Third is virtualized Managed Services, which shows how the Cisco ESP helps you rapidly create and automate the self-service, cloud-based delivery of managed network and security services; this is our business vCPE use case for enterprise and SMB users. Last but not least, with the QvPC example you will learn how a virtualized mobile packet core redefines agility for mobile carriers, and how this solution, based on NFV and SDN, helps SPs scale new packet core systems.

All in all, it will be a great show. Regardless of where you are travelling from, we promise to make the days exciting and full of learning. Sunny Mexico awaits, and we look forward to seeing you at Cisco Live Cancun!

 Tweet us @CiscoSP360 if you have any questions or comments!


Optimize Your Software-Defined Network by Assessing Hardware Requirements

Software-based techniques are transforming networking, and commercial off-the-shelf hardware is finding a place in several networking use cases. However, high-performance hardware is also an important part of successful software-defined networking (SDN). As you optimize your networks using SDN tools and complementary technologies such as network function virtualization (NFV), an important step is to strategically assess your hardware needs based on the functions you plan to run and their performance requirements, aligned with your intended business outcome for individual applications and services.

Two Categories of High-Performance Hardware

  • Network hardware that uses purpose-built designs. These often involve specialized application-specific integrated circuits (ASICs) to achieve significantly higher performance than is possible or economically feasible using commercial off-the-shelf servers based on state-of-the-art x86 general-purpose processors.
  • Network hardware that uses standard x86 servers enhanced to provide high performance and predictable operation, for example via special software techniques that bypass hypervisors, virtualization environments, and operating systems.

Where to Deploy Network Functions
Can virtualized network functions be deployed like cloud-based applications? No. There is a big difference between deploying network functions as software modules on x86 general-purpose servers and using a common cloud computing model to implement network virtualization. Simply migrating existing network functions to general-purpose servers without due regard to all the network requirements leads to dramatically uneven and unpredictable performance. This unpredictability arises mainly because data-plane workloads are often I/O bound and/or memory bound, and because the software layers contain configuration details that can significantly affect performance.
These issues are not specifically about hardware but about how the software handles the whole environment. Operating systems, hypervisors, and other infrastructure that are not tuned according to best practices for data-plane applications will continue to contribute to unpredictable performance.

Bandwidth and CPU Needs


A good way to begin assessing hardware requirements is to examine network functions along two dimensions: I/O bandwidth (throughput) needs and computational power needs. When considering which network functions to virtualize and where to virtualize them, the CPU load and bandwidth load required at different layers of the network can help determine that some, but not all, network functions are suitable for virtualization.

Applications with lower I/O bandwidth and low-to-high CPU requirements may be most appropriate for virtualized deployment on optimized x86 servers. Applications with higher I/O bandwidth, whether their CPU requirements are low or high, may be best deployed on purpose-built, high-performance hardware with specialized silicon. Many other factors may play a role in determining which hardware to use for which applications, including cost, user experience, latency, networking performance, network predictability, and architectural preferences.
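To make this two-dimensional assessment concrete, here is a rough decision sketch (a hypothetical illustration; the threshold, function names, and example workloads are invented and are not Cisco guidance):

```python
# Hypothetical sketch of the two-dimensional assessment described above.
# The threshold and example workloads are invented, not Cisco guidance.

def recommend_platform(io_gbps: float) -> str:
    """Suggest a platform class based primarily on I/O throughput needs."""
    if io_gbps > 40:
        # High I/O favors purpose-built hardware with specialized silicon.
        return "purpose-built platform (specialized ASICs)"
    # Lower-I/O functions are often good candidates for optimized x86 servers,
    # whether their CPU requirements are low or high.
    return "optimized x86 server (virtualized deployment)"

candidates = {
    "route reflector (control plane)": 1.0,
    "virtual CPE (residential)": 5.0,
    "core packet forwarding": 400.0,
}

for name, io_gbps in candidates.items():
    print(f"{name}: {recommend_platform(io_gbps)}")
```
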
Service-Network Abstraction is Key
Additionally, you might not need high-performance hardware for certain functions initially. But as a particular function scales, it might require a high-performance platform to meet its performance specifications, or it might simply be more economical on a purpose-built platform. So you might start out with commercial off-the-shelf hardware and then move the workload to high-performance hardware later. If you have focused on establishing a clean abstraction of the services from the underlying hardware infrastructure using SDN principles, the network deployment can be changed or evolved independently of the services and applications above it. This is the true promise of SDN.
Read more about how to assess hardware performance requirements for your SDN in the Cisco® white paper “High-Performance Hardware: Enhance Its Use in Software-Defined Networking.” You can find it, along with other useful information, here: “Do You Know Your Hardware Needs?”

Do you have questions or comments? Tweet us at @CiscoSP360


The Benefits of an Application Policy Language in Cisco ACI: Part 3 – Group Policies

October 17, 2014 at 5:00 am PST

[Note: This is the third of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not. Part 1 | Part 2 | Part 4]

The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the application itself, abstracting the networking functions underneath. The policy model is matched with the advanced hardware capabilities of the Cisco ACI fabric that underlie this business-application-focused control system.

The Cisco APIC object-oriented policy model is built on the distributed policy enforcement concepts for intelligent devices enabled by OpFlex and exemplified by modern development and operations (DevOps) tools such as Puppet and Chef.

At the top level, the Cisco APIC policy model is built on a series of one or more tenants, which allow network infrastructure administration and data flows to be segregated. Tenants can be customers, business units, or groups, depending on organizational needs. Below tenants, the model provides a series of objects that define the application itself: endpoints and endpoint groups (EPGs), and the policies that define their relationships (see figure below). The relationship between two endpoints, which might be two virtual machines in a three-tier web application, can be implemented by routing traffic between those endpoints through firewalls and application delivery controllers (ADCs) that enforce the appropriate security and quality-of-service (QoS) policies for the application and those endpoints.

Endpoint Group Policy

Endpoints and Application Workloads, Along with Tenants and Application Network Profiles, Are the Foundation of the Cisco ACI Policy Model

For a more thorough description of the Cisco ACI application policy model, please refer to this whitepaper, or this one more specifically on Endpoint Groups.

For this discussion, the important feature to notice is the way that Cisco ACI policies are applied to application endpoints (physical and virtual workloads) and to EPGs. Configuration of individual network devices is ancillary to the requirements of the application and its workloads. Individual devices do not require programmatic control as in prior SDN models; instead, they are orchestrated according to centrally defined and managed application policies.
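As a rough illustration of what such a policy can look like in practice, the sketch below builds a simplified APIC-style payload: one tenant, an application profile with “web” and “db” EPGs, and a contract that governs traffic between them. This is a hedged example; the names are invented and the structure is abridged from the full Cisco ACI object model, so treat it as illustrative rather than a complete configuration.

```python
# Hedged sketch: a simplified APIC-style payload describing a tenant with two
# EPGs ("web" and "db") related through a contract. Names are invented and the
# structure is abridged from the full Cisco ACI object model.
import json

policy = {
    "fvTenant": {
        "attributes": {"name": "example-tenant"},
        "children": [
            # Contract governing traffic from the web tier to the database tier
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}},
            # Application network profile grouping the EPGs of one application
            {"fvAp": {
                "attributes": {"name": "three-tier-app"},
                "children": [
                    {"fvAEPg": {  # endpoint group for web workloads (consumes the contract)
                        "attributes": {"name": "web"},
                        "children": [
                            {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-db"}}},
                        ],
                    }},
                    {"fvAEPg": {  # endpoint group for database workloads (provides the contract)
                        "attributes": {"name": "db"},
                        "children": [
                            {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-db"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}

# A client would typically authenticate to the APIC and POST this JSON to its
# REST API; here we simply print the payload.
print(json.dumps(policy, indent=2))
```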

This model is taking hold in the industry and in the open source community. The OpenStack organization has begun work on group-based policies that extend the OpenStack Neutron API for network orchestration with a declarative, policy-based model closely modeled on the EPG policies in Cisco ACI. (Note: “Declarative” refers to the orchestration model in which control is distributed to intelligent devices based on centralized policies, in contrast to retaining per-flow management control within the controller itself.)



The Benefits of an Application Policy Language in Cisco ACI: Part 2 – The OpFlex Protocol

October 14, 2014 at 5:00 am PST

[Note: This is the second of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 1 | Part 3 | Part 4]

Following on from the first part of our series, this blog post takes a closer look at the architectural components of Cisco ACI and the VMware NSX software overlay solution, to quantify the advantages of Cisco’s application-centric policies and demonstrate how the architecture supports greater scale and more robust IT automation.

As called for in the requirements listed in the previous section, Cisco ACI is an open architecture that includes the policy controller and policy repository (Cisco APIC), infrastructure nodes (network devices, virtual switches, network services, etc.) under Cisco APIC control, and a protocol for communication between Cisco APIC and the infrastructure. For Cisco ACI, that protocol is OpFlex.

OpFlex was designed with the Cisco ACI policy model and cloud automation objectives in mind, including important features that other SDN protocols could not deliver. OpFlex supports the Cisco ACI approach of separating the application policy from the network and infrastructure, but not the control plane itself. This approach provides the desired centralization of policy management, allowing automation of the entire infrastructure without limiting scalability through a centralized control point or creating a single point of catastrophic failure. Through Cisco ACI and OpFlex, the control engines are distributed, essentially staying with the infrastructure nodes that enforce the policies.
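As a conceptual sketch of this division of labor (an illustration of the declarative, distributed-enforcement idea only, not the actual OpFlex message format or APIs): the controller holds the policy repository, and each infrastructure node pulls just the policies relevant to its attached endpoints and renders them locally.

```python
# Conceptual sketch of declarative policy resolution: a central repository
# holds named policies, and each enforcement node pulls only the policies
# relevant to its locally attached endpoints and renders them itself.
# Illustrates the idea behind OpFlex, not its actual wire format.

POLICY_REPOSITORY = {
    "web-epg": {"allowed-destinations": ["db-epg:3306"], "qos": "gold"},
    "db-epg":  {"allowed-destinations": [],              "qos": "silver"},
}

class EnforcementNode:
    """A leaf switch or virtual switch that enforces policy locally."""

    def __init__(self, name: str):
        self.name = name
        self.rendered = {}  # locally rendered policy state

    def attach_endpoint(self, endpoint: str, epg: str) -> None:
        # When an endpoint attaches, resolve only the policy this node now needs.
        policy = POLICY_REPOSITORY[epg]   # pull from the central repository
        self.rendered[epg] = policy       # render and enforce locally
        print(f"{self.name}: endpoint {endpoint} joined {epg}, enforcing {policy}")

leaf = EnforcementNode("leaf-101")
leaf.attach_endpoint("vm-web-01", "web-epg")  # only web-epg policy is pulled
```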

