Cisco Blogs


Rebuttal to VMware Comments on ACI and SDN Architectures

In a January 13, 2014 NetworkWorld article, VMware executive Steve Mullaney compared VMware NSX to Cisco ACI and other SDN architectures. Cisco’s Frank D’Agostino replied to those assertions in the comments section of the article. Frank’s points are abbreviated and summarized here:

1. VMware’s pricing model is fundamentally flawed, raising OpEx costs and affecting network design decisions and scale.

VMware charges customers per port and per VM, increasing the cost of networking by 2x or more while providing less functionality, raising operating expense, and forcing you to adopt a different network architecture. ACI delivers more functionality with zero VM tax.

Our customers consistently report VMware pricing starting at $50 or more per VM per month. In competitive engagements, pricing rapidly declines to $15 per VM per month, then lower depending on the negotiation. Customers dislike per-port pricing just as they dislike per-VM pricing. All of these models get expensive and alter your design and scale considerations.

2. Claims that ACI is a proprietary platform or policy model belie the fact that many aspects of VMware’s architecture require vendor lock-in, on top of the premium pricing model.

VMware claims that ACI is proprietary. Yet customers have to get their OVS from VMware rather than from the open-source download under its open-source license. Currently, VMware is the only hypervisor platform that locks customers into a proprietary controller; Red Hat, KVM, and Hyper-V all provide open access. ACI contributions are appearing in OpenStack, IETF drafts, and VXLAN extensions, and ACI provides the most open implementation in the industry: APIs, data model, and integration with third-party controllers. Federating NSX with third-party controllers, such as HP’s, is different from providing open, bi-directional programmability.

3. Openness is really measured by the breadth of infrastructures, OS platforms, orchestration models, and so on that the policy model supports, and ACI is rapidly outdistancing NSX in this area.

ACI supports any hypervisor; any encapsulation (VXLAN, NVGRE, VLAN, and even STT); any physical platform, storage, and physical compute; Layer 4 through 7 services; and the WAN, with full flexibility to place any workload anywhere and with full policy, performance, and visibility in hardware. ACI supports Open vSwitch and allows a third-party controller to program ACI hardware components. Investment protection is built in, supporting existing platforms, and Nexus 9000 products can run enhanced NX-OS or ACI mode with a software upgrade.
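To illustrate what an encapsulation such as the VXLAN mentioned above actually carries on the wire, here is a minimal sketch of building and parsing the 8-byte VXLAN header per RFC 7348. The constants and helper names are from that specification and this sketch, not from any Cisco or VMware code:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VXLAN Network Identifier from a VXLAN header."""
    flags, vni_field = struct.unpack("!B3xI", header[:8])
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return vni_field >> 8
```

The 24-bit VNI is what gives VXLAN its roughly 16 million segments, versus the 4,094 usable IDs of a 12-bit VLAN tag.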

4. NSX has not built out any proven, scalable deployments, and it will ultimately hit the same limits that all other software emulations running on underlying hardware run into.

Any company with large-scale production experience understands that you do not simply rack, stack, and walk away. Suggesting otherwise reflects limited large-scale production implementation experience.

Many early, first-generation software-only LAN Emulation customers lead the Cisco ACI customer advisory board. LAN Emulation refers to ATM LAN Emulation, the industry’s first attempt to faithfully reproduce an Ethernet physical network on top of another medium. This sort of emulation technology has been tried and has failed. The industry is still waiting for public performance tests for unicast, multicast, and large Layer 2 domains deployed on NSX.

VMware is saying that the “tornado” (large-scale production deployments) will arrive in 2015. They are hoping to hide these scalability and deployment problems until then, while we are seeing large-scale ACI deployments this year.

5. The NSX abstraction model is doomed to reduce visibility; it requires separate management of two infrastructures, virtual and physical, as well as management of individual nodes rather than a holistic fabric.

VMware claims NSX creates the illusion of a fully functional network with all intelligence removed. The illusion holds until you have to troubleshoot a real-time performance problem, at which point you can only look into the ends of the tunnel. A software-defined network like ACI is enriched by combining the flexibility of a software overlay with performance and visibility in hardware: openness, flexibility, and visibility, in real time, hop by hop, for every application delivery platform, with consistent policy and mobility.

Why should customers reproduce the problems of a legacy network in software just to turn on a logical port and create a tunnel? In addition, they are implementing a second network to deliver applications, one that operates in ships-in-the-night mode with no coordination between logical and physical. With NSX, networking cost is shifted, not saved.

Looking into the end of a tunnel is not real-time multi-tenant visibility and traffic engineering. Hashing and queuing over multiple paths in the network, with application endpoints distributed across all network edges, bare-metal compute, every hypervisor, storage, the WAN, and hybrid cloud, does not fit a single-hypervisor implementation. NSX fails operationally when customers ask why or where there is a problem. By comparison, in ACI all traffic is optimally forwarded: Layer 2 and Layer 3 forwarding is consistent for East-West and North-South traffic, and no traffic is relayed through x86 boxes doing LAN Emulation.
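The hashing over multiple paths mentioned above is typically flow-based ECMP: a hash of the packet’s 5-tuple selects one of the equal-cost next hops, so every packet of a given flow takes the same path and ordering is preserved. A minimal sketch, where the hash function choice and the spine names are illustrative assumptions, not any vendor’s implementation:

```python
import zlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick a next hop for a flow by hashing its 5-tuple (illustrative)."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    # Same 5-tuple -> same hash -> same path: per-flow load balancing.
    return next_hops[zlib.crc32(key) % len(next_hops)]

spines = ["spine1", "spine2", "spine3", "spine4"]
hop = ecmp_next_hop("10.0.0.1", "10.0.1.2", 6, 49152, 443, spines)
```

This is also why tunnel endpoints alone cannot answer “where is the problem”: which physical path a flow hashed onto is decided hop by hop in the fabric, not at the tunnel ends.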

6. The NSX approach has an adverse impact on L4-7 services, which may not have visibility into the overlay tunnel and may have trouble chaining traffic to the right services per application policy. The ACI fabric doesn’t have these problems, and it incorporates a wide range of third-party devices into the ACI model, from Citrix, A10, Palo Alto Networks, and more, leveraging the open ACI policy model.

Layer 4-7 services are not limited to VM-based implementations. Customers have many choices in appliance architectures, and a logical pipeline that includes security, load balancing, and other appliances should be implemented consistently regardless of the appliance or hypervisor. This is done at scale, with real-time performance, visibility, and mobility, and with greater functionality and scale than software-only LAN Emulation.

Layer 4-7 services need to be open, allowing any appliance vendor, physical or virtual, consistent access for insertion into the logical pipeline. This is done with a broad ecosystem, integrating open APIs from any vendor and leveraging open device packages. This is done with ACI.
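The logical pipeline described above can be thought of as an ordered chain of service functions applied to traffic per application policy. A minimal sketch of ordered service insertion, where the class, the service names, and the packet representation are hypothetical, purely to illustrate the concept rather than any ACI or device-package API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ServiceChain:
    """An ordered pipeline of L4-7 services applied to traffic (illustrative)."""
    services: List[Callable[[Dict], Dict]] = field(default_factory=list)

    def insert(self, service, position=None):
        # Any vendor's service, physical or virtual, can be inserted anywhere.
        if position is None:
            self.services.append(service)
        else:
            self.services.insert(position, service)

    def apply(self, packet: Dict) -> Dict:
        for service in self.services:
            packet = service(packet)
        return packet

def firewall(pkt):       # hypothetical service: mark traffic as inspected
    return {**pkt, "inspected": True}

def load_balancer(pkt):  # hypothetical service: pick a backend server
    return {**pkt, "backend": "server-1"}

chain = ServiceChain()
chain.insert(firewall)
chain.insert(load_balancer)
result = chain.apply({"dst": "app-vip"})
```

The point of the open model is that `firewall` or `load_balancer` here could come from any vendor, as long as it honors the same insertion contract.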




  1. "ACI supports ...- any physical platform" Ahm, I can run ACI on top of any network HW and get the full functionality?
  2. kilbop - Any transition in platforms presents opportunities for new technologies at both the control plane and the data plane. Many existing boxes do not support VXLAN (from any vendor), and features such as those will vary at the data plane level. Configuration objects northbound, southbound programming, and how addressing and forwarding are resolved are included in the ACI architecture. Contrast that with software-only overlays, which require multiple strategies per hypervisor, physical, WAN, compute, storage, and now the campus. The ACI architecture covers these areas, including investment protection for existing platforms.
  3. Interesting post, thank you; I am trying to be unbiased in these comments.

     Regarding item #4 (“Many early, first generation software-only LAN Emulation customers lead the Cisco ACI customer advisory board. … The industry is still waiting for the public performance tests for unicast, multicast, and large layer 2 domains deployed in NSX.”): the problem with that statement is that technology at the time of ATM/LANE was not as advanced as it is today. Memory and CPUs did not have the same power. The basic concepts failed because of the scalability of LANE (Ethernet’s broadcast domain) and many other issues, including cost and economics (the price of an Ethernet NIC). That approach was doomed from the very beginning. The NSX approach is native to the underlying protocols in the network, and its scalability issues can most likely be overcome with today’s hardware and future software technology. Servers, PCs, and compute platforms (and their underlying CPUs and memory) are still following Moore’s law. Software may be another matter, but advancements seem to be moving along.

     Item #5: I agree that you cannot abstract the underlying connectivity away. This is a fire-and-blame method; VMware’s server/software bias versus the holistic approach (which I think is the correct view) is evident here. One major DC firestorm caused by VMware NSX, and it’s back to the tried-and-true methods; old methods that work will win out. Hey, let’s burn down the barn and see if that makes us milk the cows faster; the cows will not be too happy about that, and you may not survive the stampede out of the barn!

     Item #6: I agree that it’s a much more complicated world out there, and an open Layer 4-7 ecosystem is the correct approach. One vendor does not have all the answers; it’s a complicated world out there.