Following our launch of the Cisco Application Centric Infrastructure (ACI), we continue our series exploring in more detail key aspects of the ACI policy model and partner ecosystem. In Part 1, we looked at why application policies are an ideal model to build infrastructure automation around, and how application policies mirror business objectives and requirements better than traditional IT infrastructure policies. The key benefits for customers are vastly greater automation, process improvement and business agility.
In Part 2, we looked at one example of the difficulty of deploying and managing applications, and the complexity that must be overcome to truly automate application-oriented tasks: application-specific network services and security policies (along with a separate post on the partner ecosystem for application services and security solutions that support the ACI model).
In Part 3, we’ll look at one of the components of the ACI fabric that we also announced, the Application Virtual Switch (AVS). We’ve received a number of follow-on questions in this area that can be addressed here. By way of introduction, I had the chance to sit down with Balaji Sivasubramanian, Director of Product Management for AVS and the Nexus 1000V, to talk about the new AVS and how it relates to both ACI and the Nexus 1000V virtual switch (Balaji also had a related post on AVS):
https://www.youtube.com/watch?v=VbUho9Kdnxs
But wait, there’s more…
As we have highlighted, ACI was designed from the ground up (maybe the ASIC up?) to be ideal for both physical and virtual workloads and services. While more and more mission-critical applications are being virtualized, a significant percentage of applications may never be virtualized, and new classes of applications, like Big Data, increasingly run on bare metal.
Further demonstrating the complexity of heterogeneous data center application environments, quoting Soni Jiandini’s stats from the video at the end of the first blog in this series:
- Only 15% of servers are virtualized today (though with multiple VMs per virtualized server, more than 50% of all workloads are virtualized)
- 42% of customers are running multi-hypervisor environments
- 60% of workloads will be cloud-based by 2016
The important point is that organizations need a fabric that is optimally designed for both physical and virtual workloads. The ACI fabric programs policies the same way regardless of whether a virtual workload attaches through a virtual switch or a physical workload attaches to a physical top-of-rack (leaf) switch.
For virtual workloads, ACI supports hypervisor-resident virtual access switches that connect applications to virtual ports. Among other things, this virtual switch has to provide local switching and policy enforcement for east-west traffic between workloads on the same server. That’s where the Application Virtual Switch (AVS) comes in. It is one of the virtual switches supported within ACI and the only one purpose-built with ACI in mind, with support for the full ACI policy model, service chaining technology, and so on. While ACI is generally agnostic to the virtual switch, the extent to which ACI features are fully realized depends on the virtual switch implementation.
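To make the policy model a little more concrete, here is a minimal sketch of how an application policy might be declared and pushed to the controller through the APIC REST API using Python. The controller address, credentials, and the tenant, EPG and contract names are hypothetical placeholders, and the payload is deliberately simplified for illustration rather than taken from a shipping configuration.

```python
# Illustrative sketch only: pushes a minimal application profile (two EPGs and a
# contract) to an APIC controller over its REST API. Hostname, credentials and
# all object names below are hypothetical placeholders.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab-only shortcut; use proper certificates in production

# Authenticate against the APIC REST API; the session keeps the auth cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login)

# Declare the desired state: a tenant with "web" and "app" EPGs joined by a contract.
# The fabric renders this policy wherever the workloads attach -- a physical leaf
# port or a hypervisor virtual port on a supported virtual switch such as AVS.
policy = {
    "fvTenant": {
        "attributes": {"name": "ExampleTenant"},
        "children": [
            {"vzBrCP": {"attributes": {"name": "web-to-app"},
                        "children": [{"vzSubj": {"attributes": {"name": "http"}}}]}},
            {"fvAp": {"attributes": {"name": "ExampleApp"},
                      "children": [
                          {"fvAEPg": {"attributes": {"name": "web"},
                                      "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
                          {"fvAEPg": {"attributes": {"name": "app"},
                                      "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
                      ]}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=policy)
print(resp.status_code)
```

The point of the sketch is the declarative shape of the policy: the same endpoint groups and contract apply whether the workloads behind them are virtual machines behind AVS or bare-metal servers on a leaf port.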
Specific AVS features include integration with the ACI policy model and management infrastructure, and support for the data forwarding and scalability features of the new application-oriented fabric. AVS will become generally available alongside the Application Policy Infrastructure Controller (APIC), supporting the ACI fabric, controller and policy model. And as we have stated, this ACI virtual network edge and AVS will support multiple hypervisors, including VMware and Microsoft, giving greater flexibility to the large percentage of customers noted above who already run multi-hypervisor environments.
But this has raised a few questions about the status of our existing virtual switch, the Nexus 1000V, and how it relates to AVS. As noted in the video featured above, AVS is based on the Nexus 1000V but is controlled by the APIC rather than by the Virtual Supervisor Module (VSM), which is the control plane for the Nexus 1000V. AVS otherwise provides feature and management consistency with other fabrics that use the Nexus 1000V.
The Nexus 1000V will work with the new Nexus 9000 Series switches in standalone mode (i.e., without the ACI policy model or the APIC controller), since it does not provide the ACI feature support that AVS does. And we will continue development and innovation on the Nexus 1000V for non-ACI fabrics, such as Dynamic Fabric Automation (DFA) or traditional three-tier data center networks. The Nexus 1000V, by the way, continues to gain traction in the market with over 8,000 customers, many of whom are looking for feature and management consistency with their physical networks, with network policy controls owned by the networking team rather than by the server team that deploys the hypervisor-resident switch.
In fact, for customers using the Nexus 1000V today, or planning to do so in the future, we will offer a Cisco Technology Migration Program (CTMP) to AVS when they want to migrate to Nexus 9000 and ACI, so existing and future investments in the Nexus 1000V will be protected.
Hello Gary, perhaps you haven’t read the questions that I submitted on Balaji Sivasubramanian’s post, so I’ll share them here for your consideration.
I’d like to know more about the path Cisco pursued to evolve toward an “application aware” architecture. This back-story (how Cisco arrived at this juncture) would be very helpful to industry analysts, customers and institutional investors. Here are some of the key questions on my mind.
– What were the primary roadblocks that inhibited the adoption of this innovative approach in the past?
– A purpose-built hardware solution seems to be the road less traveled, because it requires a greater R&D investment. Why did Cisco take this approach, and decide against using one of the alternatives?
– What legacy design challenges did Cisco have to overcome, before it could attain the advantages of the ACI fabric orchestration model?
– My follow-on question is about the status of the “Cisco One” initiative — How is the developer ecosystem evolving since the announcement, and how many of the planned APIs are actually available?
I would appreciate it if you could share these details in a follow-on blog post. It would help us fully understand why Cisco chose a solution deployment methodology that’s somewhat different from its competitors’. Thanking you in advance for your consideration.
David,
Note that the answers to your questions are now posted here.