1. VMware's pricing model is fundamentally flawed: it raises OpEx and distorts network design decisions and scale.
VMware charges customers per port and per VM, increasing the cost of networking by 2x or more while providing less functionality, raising operations expense, and forcing you to adopt a different network architecture. ACI delivers more functionality with zero VM tax.
For VMware, our customers consistently report pricing starting at $50 or more per VM per month. In competitive engagements, pricing rapidly declines to $15 per VM per month and lower, depending on the negotiation. Customers dislike per-port pricing just as they dislike per-VM pricing: both models get expensive and alter your design and scale considerations.
2. Claims that ACI is a proprietary platform or policy model belie the fact that many aspects of VMware's architecture require vendor lock-in, on top of the premium pricing model.
VMware claims that ACI is proprietary. Yet customers must obtain their OVS from VMware rather than from the open-source download under an open source license. Currently, VMware is the only hypervisor platform that locks customers into a proprietary controller -- Red Hat, KVM, and Hyper-V all provide open access. ACI contributions are showing up in OpenStack, in IETF drafts, and through VXLAN extensions, and ACI provides the most open implementation in the industry -- APIs, data model, and integration with 3rd party controllers. Federating NSX with 3rd party controllers, such as HP's, is different from providing open, bi-directional programmability.
3. Openness is really measured by the breadth of infrastructures, OS platforms, orchestration models, etc., that are supported by the policy model, and ACI is rapidly outdistancing NSX in this area.
ACI supports any hypervisor; any encapsulation (VXLAN, NVGRE, VLAN, and even STT); any physical platform, storage, or physical compute; Layer 4 through 7 services; and the WAN -- with the flexibility to place any workload anywhere, with full policy, performance, and visibility in hardware. ACI supports Open vSwitch and allows a 3rd party controller to program ACI hardware components. Investment protection is built in: existing platforms are supported, and the Nexus 9000 products let you run enhanced NX-OS and ACI modes with a software upgrade.
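To make the "open APIs and data model" claim concrete, here is a minimal sketch of how a script might talk to ACI's controller (the APIC) over its REST API. The /api/aaaLogin.json and /api/class/&lt;class&gt;.json paths follow the published APIC REST conventions; the hostname and credentials below are placeholders, not real values.

```python
# Hedged sketch: building APIC REST API requests.
# Hostname and credentials are placeholders for illustration only.
import json

APIC = "https://apic.example.com"  # placeholder controller address

def login_payload(user: str, pwd: str) -> str:
    """Build the JSON body for APIC authentication (aaaLogin)."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def class_query_url(mo_class: str) -> str:
    """Build a class-level query URL, e.g. for all fvTenant objects."""
    return f"{APIC}/api/class/{mo_class}.json"

# With the `requests` library, one would POST login_payload() to
# f"{APIC}/api/aaaLogin.json", then GET class_query_url("fvTenant")
# using the session cookie returned by the login.
print(class_query_url("fvTenant"))
```

Because every managed object is addressable this way, a third-party controller or script can read and write the same data model the GUI uses.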
I have worked in IT since 1995 and never learned programming. Sure, I can do a little HTML, and years ago, I learned just enough Perl to configure MRTG, but I have never written a program. The good old CLI has kept me very busy and brought home the bacon.
Therefore, I have opened an account at codecademy.com. I will start with Python and Java. I see many late nights in my future.
I have thought about learning code, but I could never think of an app I wanted to write. Now Cisco is bringing together networking and programming. Cisco is not only making APIs available, Cisco is contributing code to the open source community. In fact, Cisco has created a Data Center repository, a Nexus 9000 community, and a general Cisco Systems repository on GitHub.
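As a taste of what those APIs look like, here is a hedged sketch of the JSON-RPC envelope that the Nexus 9000's NX-API accepts for CLI commands. The envelope fields follow the published NX-API format; the switch address mentioned in the comment is a placeholder.

```python
# Hedged sketch: wrapping a CLI command in the NX-API JSON-RPC envelope.
import json

def nxapi_request(cmd: str, req_id: int = 1) -> str:
    """Wrap a CLI command in the NX-API JSON-RPC body."""
    return json.dumps([{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": cmd, "version": 1},
        "id": req_id,
    }])

# With `requests`, this body would be POSTed to http://<switch>/ins
# with Content-Type application/json-rpc.
print(nxapi_request("show version"))
```

For a CLI veteran, the appeal is that the command inside the envelope is the same one you have typed for years -- the programming layer is just a wrapper around it.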
Cisco has recently overhauled the developer program and its content. The new DevNet website is filled with developer information on products such as AVC, Collaboration, UCS, CTI, EnergyWise, FlexPod, UCS Microsoft Manager, Jabber, onePK, XNC, and Telepresence.
Cisco is bringing the networking and programming worlds together, and this stubborn old networker is finally on board.
Bill Carter is a Senior Network Engineer with more than 18 years of experience. He works for Sentinel Technologies and specializes in next-generation data center, campus and WAN network services.
At the heart of the transition to cloud computing is on-demand provisioning of a wide variety of applications, linear scalability of resources, and non-stop operation at lower total cost. With the increasing frequency of rapid provisioning of data-intensive applications in the cloud, organizations are increasingly challenged to better scale and manage network and storage environments without business disruption. This necessitates a network that provides uniform latency, high bandwidth, full utilization of all paths, and configuration simplicity.
The Cisco Nexus® 9508 40GbE data center Ethernet switch was recently tested by the Lippis Report and turned in remarkable performance results while supporting 288 40GbE ports -- the highest 40GbE port density of any switch tested to date. The Cisco Nexus 9508 delivered the best overall store-and-forward latency of any core switch tested to date, with consistent latency across all packet sizes at line rate. In addition, it demonstrated 100% throughput (i.e., without dropping a single packet) across all 40GbE ports for a wide range of packet sizes. This is key for public and private cloud providers seeking aggregation and core networking technology to underpin large-scale, highly virtualized data centers and converged storage systems supporting disparate workloads with a wide range of performance requirements.
The industry-leading 40GbE density and performance of the Cisco Nexus 9508 enable data center IT to upgrade aggregation network infrastructure from 10GbE to 40GbE, complementing the shift in server networking from GbE to 10GbE. With its impressive cross-sectional bandwidth and latency numbers, the Cisco Nexus 9508 also excels in aggregation and core roles in traditional and cloud data centers as well as hyper-scale environments. The Cisco Nexus 9508 is also an optimal network infrastructure for high-performance cluster computing applications, such as large-scale data analytics and low-latency trading.
For unicast traffic, the Cisco Nexus 9508 delivered store-and-forward latencies ranging from 1.6 microseconds for the 64B packets used in transaction workloads to 3.5 microseconds for the 9KB packets used in data-intensive, large-file applications. Latency variation ranged from 1 to 3 ns, giving consistent latency across all packet sizes at line rate. These are by far the lowest latency measurements observed by the Lippis Report in core switches to date (the previous record for modular switch latency was 2.2 to 11.9 microseconds over the same packet-size range, though at much lower density).
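A quick back-of-the-envelope check (my calculation, not from the report) shows why store-and-forward latency must grow with packet size: the switch has to receive the entire packet before it can begin transmitting, so serialization delay alone sets a floor.

```python
# Serialization delay: time to clock a full packet onto a link.
# A store-and-forward switch incurs this at minimum per hop.
def serialization_delay_us(packet_bytes: int, link_gbps: float) -> float:
    """Return serialization delay in microseconds."""
    return packet_bytes * 8 / (link_gbps * 1e3)

print(serialization_delay_us(64, 40))    # 64B at 40GbE: 0.0128 us
print(serialization_delay_us(9000, 40))  # 9KB at 40GbE: 1.8 us
```

At 40GbE, serialization alone accounts for 1.8 of the 3.5 microseconds measured for 9KB packets, which puts the reported numbers in perspective.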
For IP multicast traffic, the Cisco Nexus 9508 demonstrated store-and-forward latencies ranging from 1.6 microseconds for 64B packets to 3.5 microseconds (3,465.3 ns) for 9KB packets, forwarding IP multicast traffic faster than any other core switch observed in the Lippis core switch tests.
The Cisco Nexus 9508’s congestion management is excellent, sustaining an aggregated forwarding rate of nearly 78% of line rate under congestion for L3 traffic flows; considering the port density and the sheer magnitude of the traffic involved, the Cisco Nexus 9508 achieved congestion management at a scale never before attempted.
The Cisco Nexus 9508 also demonstrated 100% throughput as a percentage of line rate across all 288 40GbE ports for unicast traffic. In other words, not a single packet was dropped while the Cisco Nexus® 9508 was presented with enough traffic to populate its highly dense 288 40GbE ports at line rate.
The full report can be found here:
Following are links to webcasts highlighting unicast and multicast support in the Cisco Nexus 9000:
Nexus 9000 Unicast forwarding by Lilian Quan
2013 was the year I started working on SDN -- specifically, devising professional services for Cisco ONE and Application Centric Infrastructure (ACI). A few months ago, I used a compendium to summarize my Cisco Domain TenSM blogs. It was well received, so I thought it would be a good idea to wrap up the year with a summary of my 2013 journey into the SDN world, and in particular the adoption challenges I learned about along the way, some of which are illustrated in the diagram below.
The other week I attended the “Software Defined Networking 2013” conference in London, a UK-based event for discussing SDN, OpenFlow and network virtualisation solutions from a strategic perspective. I picked up quite a few interesting perspectives at this conference. In particular, it reinforced for me the potential of SDN – but if you apply it to the wrong problem, you may not get the return you hope for!
Top of mind for me coming out of this conference, then, was a demo of “What SDN Can Do For You” from one of our competitors. At best, the phrase “using a sledgehammer to crack a nut” comes to mind.
The demo came from our friends in Palo Alto, who once (boldly but incorrectly!) predicted that “Cisco UCS would be dead a year after launch”. They gave an SDN-focused demo that, when I “peeled back the onion”, didn’t demonstrate a compelling SDN use case. Rather, it convinced me that if you have the particular problem illustrated in their demo, you don’t need SDN: you need a new vendor!