On July 18th, Cisco IT turned on its first deployment of Application Centric Infrastructure (ACI) fabric at our engineering data center in San Jose. By using ACI fabric to simplify and flatten the data center network, we can reduce network operating costs by as much as 55 percent and incident management costs by roughly 20 percent. Take a peek at the ACI fabric inside our engineering data center:
I am Soni Jiandani, SVP of Marketing for Cisco’s Insieme Business Unit. Together with a team of veteran leaders and engineers, we continue to disrupt markets to drive industry transformation. Our latest disruption is focused on leapfrogging Software Defined Networks (SDN) with a holistic approach to the future of networking: Application Centric Infrastructure, or ACI for short.
This blog is timed with the announcement that ACI is shipping – namely the Application Policy Infrastructure Controller (APIC) and ACI mode for the Nexus 9000. But this is not a corporate sales blog. My intent is to foster an open discussion about the future of the networking industry.
ACI: A key enabler to driving fast IT
We have spent the past few years gathering the best and brightest engineering minds around one simple goal: to design an infrastructure for our customers that meets the needs of applications today and in the future. These applications require a dynamic, agile, fast, secure, scalable, and reliable infrastructure that is automated as a native, baseline requirement.
Cisco has a broad base of data center customers with a diverse set of requirements, and we meet their needs with Nexus -- the most comprehensive switching portfolio in the industry. This week, we are making announcements for both the Nexus 9000 series and the Nexus 3000 series that provide design and deployment flexibility for our commercial, enterprise, service provider, and cloud customers. Key points of the announcement include:
- ACI (Application Centric Infrastructure) is shipping this month;
- Additional linecard and chassis options provide customer choice and flexibility;
- 100G linecards for the Nexus 9500 will be available in Q4CY14 and will offer the highest density in the industry; and
- New starter kits and bundles help customers ease transitions.
The Nexus 9000 Series
ACI is shipping this month
The Nexus 9000 series can operate in standard NX-OS mode or in ACI mode. In either case, the Nexus 9000 portfolio delivers the value of the “5 P’s”: Power efficiency, Price, Port density, Performance, and Programmability. NX-OS mode provides customers with the value of the NX-OS operating system used by tens of thousands of customers in data centers around the world. ACI mode adds to NX-OS capabilities by providing an application-driven policy model, integration of hardware and software, and centralized visibility, among other things. ACI requires a controller and switch software; both are shipping this month. It is important to note that the pricing for this solution is simple and predictable: there is a perpetual license for each leaf switch. Other pricing approaches in the industry are monthly and are based on varying elements like the number of VMs. Comparing the two approaches is somewhat like comparing a flat-rate cell phone plan with a usage-based one. Personally, I like the simplicity and predictability of flat rate. See The Future of Networking, as well as SDN and Beyond, for additional details on the new ACI announcements and how they can take you beyond SDN.
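To make the flat-rate vs. usage-based comparison concrete, here is a minimal back-of-the-envelope sketch. All prices and VM counts are entirely hypothetical illustrations, not actual Cisco or competitor list prices:

```python
# Hypothetical cost comparison between the two licensing models described
# above: a one-time perpetual license per leaf switch vs. a monthly fee
# per VM. Every number here is an illustrative assumption.

def flat_rate_cost(num_leaves, license_per_leaf):
    """One-time perpetual per-leaf license: independent of VM count and time."""
    return num_leaves * license_per_leaf

def usage_based_cost(monthly_vm_counts, price_per_vm_month):
    """Monthly per-VM license: grows with both time and VM count."""
    return sum(vms * price_per_vm_month for vms in monthly_vm_counts)

# Example: a 10-leaf pod over 3 years, with the VM count growing each month.
flat = flat_rate_cost(num_leaves=10, license_per_leaf=15_000)
usage = usage_based_cost([100 + 10 * m for m in range(36)],
                         price_per_vm_month=30)

print(flat)   # fixed and fully predictable up front
print(usage)  # keeps accruing as the environment grows
```

Under these made-up numbers, the perpetual model is a single predictable outlay, while the per-VM model keeps accruing as the environment grows -- which is exactly the predictability argument above.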
Additional linecard and chassis options underscore flexibility
We’ll consider how flexibility is delivered for both modular and fixed platforms. For modular switching, the Nexus 9500 modular chassis family offers different line card options that can be mixed in the same chassis, allowing customers to “dial up” or “dial down” their design based on the price, performance, feature set, and scale they want to achieve. There are three different ‘flavors’, all of which are now shipping:
- The Nexus 9500 X9400 set of 1/10G and 40G line cards are based on merchant silicon and provide industry-leading price and performance compared to other merchant silicon switches. These provide a very cost effective solution ideal for traditional modular data center designs.
- The Nexus 9500 X9500 set of 1/10G and 40G line cards are sometimes referred to as “merchant plus” because they have custom Cisco ASICs, in addition to merchant silicon, and are ideal for customers that need performance together with additional buffering and VXLAN routing capabilities. The X9500 line cards can be used in future ACI designs as well.
- The Nexus 9500 X9600 set of 40G line cards provide performance without compromise even for small packet sizes.
The Nexus 9300 series offers ACI capabilities (like the X9500 line cards in item 2 above) in a fixed form factor. For customers interested in a merchant-only fixed form factor, we offer the Nexus 3000 family. This week, we announced the new Nexus 3164, which provides 64 ports of 40G and is a great solution for 40G access or space-constrained aggregation.
We are also announcing 100G line cards that we believe will deliver industry-leading port density of up to 128 ports of 100G in a single chassis. 100G line cards for both the X9400 and X9600 series will be available for the Nexus 9500 in Q4CY14: an 8-port 100G X9400 line card and a 12-port 100G X9600 line card.
New starter kits and bundles ease transitions
There are numerous packages available to ease transitions -- from 1G to 10G, from 10G to 40G, or from traditional networks to ACI. There are two bundles I want to quickly call out. The first provides a smooth transition for customers with older end-of-row Catalyst 6500s in their data centers: it occupies the same rack space and uses the same cabling they currently have, but provides 10X the performance. The second is essentially an ACI starter kit, providing the APIC, spine switches, leaf switches, and even optical cables -- everything required to set up and get started with an ACI pod.
In summary, Cisco is continuing its rapid pace of innovation and execution around ACI and data center switching overall. Ultimately, this means customers gain choice, flexibility and true innovation to support their business needs.
Customers gain great value from server virtualization in the form of virtual machines (VMs) and, more recently, Linux containers and Docker in data centers, clouds, and branches. By some estimates, more than 60 percent of workloads are virtualized, although fewer than 16 percent of physical servers run a hypervisor (IDC). From a networking perspective, the hypervisor virtual switch on these virtualized servers plays a critical role in all current and future data center, cloud, and branch designs and solutions.
As we count down to the annual VMworld conference and reflect on the introduction of the Cisco Nexus 1000V in vSphere 4.0 six years ago, we can feel proud of what we have achieved. We have to congratulate VMware for their partnership and success in opening vSphere networking to third party vendors. It was beneficial for our joint customers, and for both companies. VMware and Cisco could be considered visionaries in this sense. Recognizing this success, the industry has followed.
We similarly praise Microsoft for providing an open environment for third-party virtual switches within Hyper-V, which has continued gaining market share recently. Cisco and Microsoft (along with other industry players) are leading the industry with the latest collaboration: submitting the OpFlex control protocol to the IETF. Microsoft’s intention to support OpFlex in its native Hyper-V virtual switch enables standards-based interaction with virtual switches -- another win for customers and the industry.
In KVM and Xen environments, many organizations have looked at Open vSwitch (OVS) as an open source alternative. There is interest in richer networking than the standard Linux Bridge provides, and in using OVS as a component for implementing SDN-based solutions like network virtualization. We think there is an appetite for OVS on other hypervisors as well. Cisco is also committed to contributing to and improving these open source efforts. We are active contributors to the Open vSwitch project and are diligently working to open source our OpFlex control protocol implementation for OVS in the OpenDaylight consortium.
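As a rough illustration of what "richer networking than the standard Linux Bridge" looks like in practice, the sketch below configures an OVS bridge with a VLAN-tagged VM port and an external SDN controller. The interface names and controller address are hypothetical placeholders, not values from any particular deployment:

```
# Create an OVS bridge and attach a physical uplink
# (eth1 is an illustrative interface name).
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1

# Attach a VM-facing port as a VLAN 100 access port -- a per-port
# VLAN model the plain Linux Bridge does not offer natively.
ovs-vsctl add-port br0 vnet0 tag=100

# Optionally hand the bridge's forwarding decisions to an external
# SDN controller over OpenFlow (address is a placeholder).
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633
```

That last step is where OVS becomes a building block for the SDN-based network virtualization solutions mentioned above.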
To recap on the thoughts from above, Table 1 provides a quick glance at the options for virtual networking from multiple vendors as of today:
Table 1: Hypervisors and Choices in Virtual Switches

| Hypervisor | Native vSwitch | 3rd-party or open-source vSwitch |
| --- | --- | --- |
| VMware vSphere | Distributed Virtual Switch | Cisco Nexus 1000V, Cisco Application Virtual Switch, IBM DVS 5000V, HP Virtual Switch 5900V |
| Microsoft Hyper-V | Native Hyper-V switching | Cisco Nexus 1000V |
| KVM / Xen | Linux Bridge (some distributions include OVS natively) | OVS (an open source project with contributions from many vendors and individuals), Cisco Nexus 1000V |
As an IT professional, whether you are running workloads on Red Hat KVM, Microsoft Hyper-V, or VMware vSphere, it is difficult to imagine not having a choice of virtual networking. For many customers, this choice still means using the hypervisor’s native vSwitch. For others, it is about having an open source alternative, like OVS. And in many other cases, having the option of selecting an enterprise-grade virtual switch has been key to increasing virtualization deployments, since it enables consistent policies and network operations between virtual machines and bare-metal workloads.
As can be seen in the table above, Cisco Nexus 1000V continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise class functionality and features across vSphere, Hyper-V and KVM. Currently, over 10,000 customers have selected this option with Cisco Nexus 1000V in either vSphere, Hyper-V, or KVM (or a combination of them).
Cisco is fully committed to the Nexus 1000V for vSphere, Hyper-V and KVM and also the Application Virtual Switch (AVS) for Application Centric Infrastructure (ACI), in addition to our open source contributions to OVS. Cisco has a large R&D investment in virtual switching, with a lot of talented engineers dedicated to this area, inclusive of those working on open-source contributions.
The Nexus 1000V 3.0 release for vSphere is slated for general availability in August 2014. This release addresses the scale requirements of our growing customer base and adds an easy installation tool, the Cisco Virtual Switch Update Manager. The Cisco AVS for vSphere will bring the ACI policy framework to virtual servers. With ACI, customers will for the first time benefit from a true end-to-end virtual-plus-physical infrastructure managed holistically to provide visibility and optimal performance for heterogeneous hypervisors and workloads (virtual or physical). These innovations and choices are enabled by the availability of open choices in virtual switching within hypervisors.
As we look forward to VMworld next month, we are excited to continue the collaborative work with platform vendors VMware, Microsoft, Red Hat, Canonical, and the open source community to maintain and continue development of openness and choice for our customers. We are fully committed to this vision at Cisco.
Acknowledgement: Juan Lage (@juanlage) contributed to this blog.
Tags: application centric infrastructure, Application Virtual Switch, AVS, Canonical, KVM, Microsoft Hyper-V, Nexus1000V, open source, opendaylight, OpFlex, opflex protocol, OVS, RedHat, VMware vSphere, vmworld, vmworld 2014
Last week I spent some time at the “Software Defined Networking 2014” conference in London. It’s a relatively small conference, I would say; however, given the growing interest in SDN and the rapid progress of the technology, it’s always good to hear alternative viewpoints and experiences. And I certainly found the previous conference here in December 2013 interesting -- in particular one vendor, in my view, using SDN as a “hammer to crack a nut“.
Cisco wasn’t present at this conference last week, so what are others saying about SDN? Here is a quick summary of my takeaways (in some cases questions raised in my mind), which I will expand on below. And let me be controversial in my summary!
(1) Negligible discussion of live SDN deployments
(2) NFV -- at least for service providers -- is potentially a quicker win than SDN
(3) SDN “washing” is alive and well
(4) Is OpenFlow more of an academic pursuit?
(5) OpenDaylight excitement
(6) Negligible discussion of “making it happen”
As I say, to some my statements may be controversial -- let me explain!