
The Top Five Myths about Cisco ACI

October 22, 2014 at 5:30 am PST

It’s been almost a year since Cisco publicly unveiled its Application Centric Infrastructure (ACI). As we’ve noted in the past, ACI had to overcome a number of preconceived notions about Software Defined Networking (SDN), and without some detailed explanation, it was hard to get your head around how ACI worked and how it related to SDN. As we continue to clarify the message, there are still a number of ACI myths circulating that we spend a good amount of time dispelling, so I thought I’d summarize them here. (Like Centralized Policy Management, Centralized Myth Handling can lead to greater efficiency and increased compliance. :-)).

1. MYTH: Cisco has limited software expertise and can’t deliver a true SDN solution because ACI requires Cisco switches (hardware) as well as the APIC controller (software).
REALITY: Cisco believes data centers require a solution that combines the flexibility of software with the performance and scalability of hardware. ACI is the first data center and cloud solution to offer full visibility and integrated management of both physical and virtual networked IT resources, all built around the policy requirements of the application. ACI delivers SDN, but goes well beyond it to also deliver policy-based automation.

2. MYTH: ACI requires an expensive “forklift upgrade”: Cisco customers must replace their existing Nexus switches with new ACI-capable switches.
REALITY: ACI is actually quite affordable, thanks to its licensing model and because customers can extend ACI policy management to their entire data center by implementing a “pod” with a cost-effective ACI starter kit. On July 29, Cisco announced four ACI starter kits: cost-effective bundles that are ideal for proof-of-concept and lab deployments, and for creating an ACI central policy “appliance” that lets existing Cisco Nexus 2000-7000 infrastructure scale out private clouds using ACI. Customers who compare ACI to software-only SDN solutions discover that operational costs, roughly 75 percent of overall IT costs, are substantially lower with ACI, and the existing network infrastructure can still be leveraged, so the total cost of ownership is compelling.

3. MYTH: The ACI solution is not open; Cisco doesn’t do enough with the open community.
REALITY: Openness is a core tenet of ACI design. We see openness in three dimensions: open source, open standards, and open APIs. This naturally fosters an open ecosystem as well. Several partners, such as F5 and Citrix, are already shipping device packages for joint deployments. Customers experience tremendous benefits when vendors come together to provide tightly integrated solutions engineered to work together out of the box.

ACI is designed to operate in heterogeneous data center environments with multiple vendors and multiple hypervisors. ACI supports an open ecosystem covering a broad range of Layer 4-7 services, orchestration platforms, and automation tools. One of the key drivers behind this ecosystem is OpFlex, an open standards initiative that helps customers achieve an intelligent, multivendor, policy-enabled infrastructure. Additionally, through contributions to OpenStack Neutron with our Group-Based Policy model, we are offering a fully open source policy API available to any OpenStack user. Cisco is also working with open source Linux vendors like Red Hat and Canonical to distribute an ACI OpFlex agent for OVS, and contributing the Group-Based Policy model to OpenDaylight.

4. MYTH: Customers want SDN solutions for their data center networks, but ACI is not an SDN solution.
REALITY: We believe that SDN, or even the software-defined data center, is not the end result customers are looking for. It is the policy-based management and automation provided by ACI that delivers tremendous benefits to application deployment and troubleshooting, and provides a compelling TCO by cutting operational costs. Channel partners agree with us: a recent Baird Equity Research study surveyed 60 channel solution providers and found that they would recommend the Nexus 9000 portfolio and ACI to their customers.

5. MYTH: Cisco can’t compete against cheap commodity “white box” switches; they are the future of data center networks.
REALITY: The truth is that only a handful of companies can effectively deploy white boxes, because they demand a great deal of operational management and troubleshooting that costs more than the upfront price premium of non-commodity hardware. Deutsche Bank published a report last year titled “Whitebox Switches Are Not Exactly a Bargain,” which explains how the total cost equation changes once operational costs are taken into account. In addition, white boxes don’t include the rich features and capabilities that most companies want. Channel solution providers know this very well: the same Baird Equity Research study of 60 channel solution providers cited above indicated that only 2% would recommend NSX running on white-box or non-Cisco networking gear.

In the data center, “one size does not fit all”, so Cisco offers a variety of switch configurations to match customer needs. For example, customers can start with merchant silicon-based line cards and migrate to an ACI environment with ACI-capable line cards and APIC, if and when they wish.

BOTTOM LINE: We believe that Cisco will continue to win with our partners in the data center by delivering innovation through a highly secure and application-centric infrastructure. Through training, support, and new certifications, we are empowering over two million networking engineers and thousands of channel partners worldwide to succeed with ACI in the data center and cloud.


The Benefits of an Application Policy Language in Cisco ACI: Part 4 – Application Policies for DevOps

October 21, 2014 at 5:00 am PST

[Note: This is the last installment of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 1 | Part 2 | Part 3]

As noted earlier in this series, modern DevOps applications such as Puppet, Chef, and CFEngine have already moved toward the declarative model of IT automation, so there is already some obvious synergy between DevOps and the Cisco ACI policy model. DevOps automation products are also optimizing application delivery processes and are designed to automate critical IT tasks to make the organization more agile and efficient.
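
To make the declarative idea concrete, here is a minimal illustrative sketch of the pattern such tools follow: the operator declares the desired end state, and an engine reconciles the actual state toward it. All names below are hypothetical, not taken from Puppet, Chef, or CFEngine.

```python
# Minimal sketch of declarative automation: the operator states *what* the
# end state should be; the engine works out *how* to get there.
desired_state = {
    "web01": {"package": "nginx", "service": "running"},
    "web02": {"package": "nginx", "service": "running"},
}

def current_state(host):
    # Stand-in for a real inventory query (e.g., an agent's report).
    return {"package": "nginx", "service": "stopped"}

def reconcile(host, desired):
    actual = current_state(host)
    for key, want in desired.items():
        if actual.get(key) != want:
            # The engine chooses the imperative steps; the operator never does.
            print(f"{host}: converging {key}: {actual.get(key)!r} -> {want!r}")

for host, desired in desired_state.items():
    reconcile(host, desired)
```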

In an early 2014 blog post, Andi Mann, vice president of strategic solutions at CA Technologies, wrote about the evolution to DevOps and the synergy with the Cisco ACI policy model:

Though the DevOps approach of today, with its notable improvements to culture, process, and tools, certainly delivers many efficiencies, automation and orchestration of hardware infrastructure have still been limited by traditional data center devices, such as servers, network switches, and storage devices. By adding a virtualization layer to servers, networks, and storage, IT was able to subdivide some of these infrastructure devices and enable a bit more fluidity in compute resourcing, but this still comes with manual steps or custom scripting to prepare the end-to-end application infrastructure, and its networking, for use in a DevOps approach.

The drag created by these traditional application infrastructures has been somewhat reduced by giving that problem to cloud providers, but in reality this drag never really went away until Cisco innovated application-centric programmability with Cisco ACI. This innovative new solution is now poised to greatly benefit the whole application economy, especially management of the DevOps application environment…



SP Network Transformation at the Cisco Live Cancun World of Solutions

October 21, 2014 at 1:30 am PST

By Igor Dayen, Manager, SP Product and Solutions Marketing

The excitement starts on November 3rd in Cancun, Mexico, where Cisco is holding our next Cisco Live event. It is a great opportunity for the service provider community to study with industry experts, get inspired, and understand how Cisco’s Open Network strategy can fast-track their growth. In the World of Solutions, the SP booth is hosting numerous demos and live equipment that tell the story of how Cisco is helping carriers address their business requirements.

Cisco Live is known for its extensive lineup of breakout technical sessions, and Cancun will be no different. Hot topics such as NFV and SDN will be covered extensively.


Optimize Your Software-Defined Network by Assessing Hardware Requirements

Software-based techniques are transforming networking. Commercial off-the-shelf hardware is finding a place in several networking use cases. However, high-performance hardware is also an important part of successful software-defined networking (SDN). As you optimize your networks using SDN tools and complementary technologies such as network function virtualization (NFV), an important step is to strategically assess your hardware needs based on function and performance requirements, aligned with your intended business outcomes for individual applications and services.

Two Categories of High-Performance Hardware

  • Network hardware that uses purpose-built designs. These often involve specialized application-specific integrated circuits (ASICs) to achieve significantly higher performance than what is possible, or economically feasible, using commercial off-the-shelf servers based on state-of-the-art x86 general-purpose processors.
  • Network hardware that uses standard x86 servers enhanced to provide high performance and predictable operation, for example via special software techniques that bypass hypervisors, virtualization environments, and operating systems.

Where to Deploy Network Functions

Can virtualized network functions be deployed like cloud-based applications? No. There is a big difference between deploying network functions as software modules on x86 general-purpose servers and using a common cloud computing model to implement network virtualization. Simply migrating existing network functions to general-purpose servers without due regard for all the network requirements leads to dramatically uneven and unpredictable performance. This unpredictability stems mainly from data plane workloads often being I/O bound and/or memory bound, and from software layers containing important configuration details that can affect performance.

These issues are not specifically about hardware but about how the software handles the whole environment. Operating systems, hypervisors, and other infrastructure that are not tuned to best practices for data plane applications will continue to contribute to unpredictable performance.
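
As one small, concrete illustration of such software techniques, a data-plane worker can be pinned to dedicated CPU cores so the scheduler does not migrate it and introduce jitter. This is a minimal, Linux-only sketch; the core numbers are placeholders, and real systems go much further (for example, bypassing the kernel network stack entirely).

```python
# Minimal Linux-only sketch: pin this process to dedicated cores so the OS
# scheduler does not migrate a data-plane worker between CPUs. Core numbers
# are placeholders; they should be cores isolated from general workloads.
import os

DATA_PLANE_CORES = {2, 3}                   # assumed isolated cores
os.sched_setaffinity(0, DATA_PLANE_CORES)   # 0 = the current process
print("pinned to cores:", os.sched_getaffinity(0))
```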

Bandwidth and CPU Needs


A good way to begin assessing hardware requirements is to examine network functions in two dimensions: I/O bandwidth (throughput) needs and computational (CPU) needs. When considering which network functions to virtualize, and where, the CPU load and bandwidth load required at different layers of the network can help determine which functions are suitable for virtualization; some are not.

Applications with lower I/O bandwidth and low-to-high CPU requirements may be most appropriate for virtualized deployment on optimized x86 servers. Applications with higher I/O bandwidth and low-to-high CPU requirements may be best deployed on specialized high-performance hardware with specialized silicon. Many other factors may play a role in determining what hardware to use for which applications, including cost, user experience, latency, networking performance, network predictability, and architectural preferences.
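
As a rough illustration of this two-dimensional triage, a helper like the hypothetical sketch below could map a function’s requirements to a deployment suggestion. The threshold is an invented placeholder, not Cisco guidance, and real decisions also weigh the cost, latency, and predictability factors listed above.

```python
# Hypothetical triage along the two dimensions discussed above. Note that,
# per the text, both deployment targets suit low-to-high CPU loads, so the
# I/O bandwidth requirement dominates this simple split.
def suggest_platform(io_gbps: float, cpu_cores: int) -> str:
    HIGH_IO_GBPS = 40.0  # invented placeholder cutoff for "higher I/O"
    if io_gbps < HIGH_IO_GBPS:
        return "virtualized on optimized x86 servers"
    return "specialized high-performance hardware with specialized silicon"

print(suggest_platform(io_gbps=10.0, cpu_cores=8))    # e.g., a control-plane function
print(suggest_platform(io_gbps=100.0, cpu_cores=16))  # e.g., a core forwarding function
```
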
Service-Network Abstraction Is Key
Additionally, you might not need high-performance hardware for certain functions initially. But as a particular function scales, it might require a high-performance platform to meet its performance specifications, or it might be more economical on a purpose-built platform. So you might start out with commercial off-the-shelf hardware and transfer the workload to high-performance hardware later. If you have focused on establishing a clean abstraction of the services from the underlying hardware infrastructure using SDN principles, the network deployment can be changed or evolved independently of the services and applications above it. This is the true promise of SDN.
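
A minimal sketch of that abstraction, with all class and method names hypothetical: the service programs against an interface, so the backing platform can move from commercial off-the-shelf hardware to a purpose-built platform without the service changing.

```python
# Hypothetical sketch: a service declares intent against an abstract
# interface, so the hardware backend can be swapped without touching it.
from abc import ABC, abstractmethod

class ForwardingBackend(ABC):
    @abstractmethod
    def install_route(self, prefix: str, next_hop: str) -> None: ...

class CotsX86Backend(ForwardingBackend):
    def install_route(self, prefix, next_hop):
        print(f"[x86 software] route {prefix} via {next_hop}")

class PurposeBuiltBackend(ForwardingBackend):
    def install_route(self, prefix, next_hop):
        print(f"[ASIC platform] route {prefix} via {next_hop}")

def deploy_service(backend: ForwardingBackend):
    # The service logic is identical regardless of the platform beneath it.
    backend.install_route("10.0.0.0/24", "192.168.1.1")

deploy_service(CotsX86Backend())       # start on commodity hardware
deploy_service(PurposeBuiltBackend())  # migrate when scale demands it
```
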
Read more about how to assess hardware performance requirements in your SDN in the Cisco® white paper “High-Performance Hardware: Enhance Its Use in Software-Defined Networking,” which you can find, along with other useful information, at “Do You Know Your Hardware Needs?”

Do you have questions or comments? Tweet us at @CiscoSP360


The Benefits of an Application Policy Language in Cisco ACI: Part 3 – Group Policies

October 17, 2014 at 5:00 am PST

[Note: This is the third installment of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not. Part 1 | Part 2 | Part 4]

The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the application itself, abstracting away the networking functions underneath. The policy model combines with the advanced hardware capabilities of the Cisco ACI fabric beneath it to form a business-application-focused control system.

The Cisco APIC policy object-oriented model is built on the distributed policy enforcement concepts for intelligent devices enabled by OpFlex and characterized by modern development and operations (DevOps) applications such as Puppet and Chef.

At the top level, the Cisco APIC policy model is built on a series of one or more tenants, which allows network infrastructure administration and data flows to be segregated. Tenants can be customers, business units, or groups, depending on organizational needs. Below tenants, the model provides a series of objects that define the application itself. These objects are endpoints and endpoint groups (EPGs) and the policies that define their relationships (see figure below). The relationship between two endpoints, which might be two virtual machines connected in a three-tier web application, can be implemented by routing traffic between the endpoints through firewalls and application delivery controllers (ADCs) that enforce the appropriate security and quality of service (QoS) policies for the application and those endpoints.

Endpoint Group Policy

Endpoints and Application Workloads Along with Tenants and Application Network Profiles Are the Foundation of the Cisco ACI Policy Model

For a more thorough description of the Cisco ACI application policy model, please refer to this white paper, or this one focused specifically on Endpoint Groups.

For this discussion, the important feature to notice is the way Cisco ACI policies are applied to application endpoints (physical and virtual workloads) and to EPGs. Configuration of individual network devices is ancillary to the requirements of the application and workloads. Individual devices do not require programmatic control as in prior SDN models; instead, they are orchestrated according to centrally defined and managed application policies.
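
As a concrete flavor of this model, the sketch below shows what declaring a tenant, an application network profile, and its EPGs through the APIC REST API can look like. The controller address, credentials, and object names are placeholders, and the payload is simplified; consult the APIC REST API documentation for the authoritative schema.

```python
# Sketch: declare a tenant, application profile, and EPGs on the APIC.
# Host, credentials, and names are placeholders; error handling omitted.
import requests

APIC = "https://apic.example.com"  # placeholder controller address
session = requests.Session()

# Authenticate (aaaLogin), then post one declarative policy document.
session.post(f"{APIC}/api/aaaLogin.json", verify=False, json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
})

policy = {
    "fvTenant": {
        "attributes": {"name": "ExampleTenant"},
        "children": [{
            "fvAp": {  # application network profile
                "attributes": {"name": "ThreeTierWebApp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "web"}}},
                    {"fvAEPg": {"attributes": {"name": "app"}}},
                    {"fvAEPg": {"attributes": {"name": "db"}}},
                ],
            }
        }],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=policy, verify=False)
print(resp.status_code)
```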

This model is taking hold in the industry and in the open source community. The OpenStack community has begun work on including group-based policies to extend the OpenStack Neutron API for network orchestration with a declarative, policy-based model based closely on EPG policies from Cisco ACI. (Note: “declarative” refers to the orchestration model in which control is distributed to intelligent devices based on centralized policies, in contrast to retaining per-flow management control within the controller itself.)
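
To suggest the flavor of that declarative Neutron extension, here is a rough sketch of creating a classifier and groups through the Group-Based Policy REST resources. The endpoint URL, token, resource paths, and field names are assumptions drawn from the Group-Based Policy project and may differ by release; treat this as an illustration of the intent-centric workflow, not a reference.

```python
# Rough sketch of the Group-Based Policy (GBP) extension to Neutron.
# URL, token, paths, and fields are assumptions and may vary by release.
import requests

NEUTRON = "http://controller:9696/v2.0/grouppolicy"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "PLACEHOLDER_TOKEN"}

# Declarative intent: describe *what* traffic means ("http"), not *how*
# any particular switch should be configured to permit it.
requests.post(f"{NEUTRON}/policy_classifiers", headers=HEADERS, json={
    "policy_classifier": {"name": "http", "protocol": "tcp",
                          "port_range": "80", "direction": "in"}
})

# Groups of workloads (analogous to ACI endpoint groups).
for group in ("web", "app"):
    requests.post(f"{NEUTRON}/policy_target_groups", headers=HEADERS,
                  json={"policy_target_group": {"name": group}})
```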

