In a January 13, 2014 NetworkWorld article, VMware executive Steve Mullaney compared VMware NSX to Cisco ACI and other SDN architectures. Cisco’s Frank D’Agostino responded to those assertions in the article’s comments section. Frank’s points are abbreviated and summarized here:
1. VMware’s pricing model is fundamentally flawed: it raises OpEx and distorts network design decisions and scale.
VMware charges customers per port and per VM, increasing the cost of networking by 2x or more while delivering less functionality, raising operational expense, and forcing adoption of a different network architecture. ACI delivers more functionality with zero VM tax.
Customers consistently report VMware pricing starting at $50 or more per VM per month. In competitive engagements, pricing rapidly declines to $15 per VM per month, then lower depending on the negotiation. Customers do not like per-port pricing any more than they like per-VM pricing: both models get expensive and alter design and scale decisions.
2. Claims that ACI is a proprietary platform or policy model belie the fact that many aspects of VMware’s architecture require vendor lock-in, on top of the premium pricing model.
VMware claims that ACI is proprietary, yet customers have to get their OVS from VMware rather than from the open-source download. VMware is currently the only hypervisor platform that locks customers into a proprietary controller; Red Hat, KVM, and Hyper-V all provide open access. ACI contributions are showing up in OpenStack, IETF drafts, and VXLAN extensions, and ACI provides the most open implementation in the industry: APIs, data model, and integration with third-party controllers. Federating NSX with third-party controllers, such as HP, is different from providing open, bi-directional programmability.
3. Openness is really measured by the breadth of infrastructures, OS platforms, orchestration models, and so on that the policy model supports, and ACI is rapidly outdistancing NSX in this area.
ACI supports any hypervisor, any encapsulation (VXLAN, NVGRE, VLAN, and even STT), any physical platform, storage, physical compute, Layer 4-7 services, and WAN, with full flexibility to place any workload anywhere and with full policy, performance, and visibility in hardware. ACI supports Open vSwitch and allows a third-party controller to program ACI hardware components. Investment protection is built in: existing platforms are supported, and Nexus 9000 products can run either enhanced NX-OS or ACI mode with a software upgrade.
Tags: ACI, application centric infrastructure, NSX, SDN, software defined networking
More and more enterprises are managing distributed infrastructures and applications that need to share data. This data sharing can be viewed as data flows that connect (and flow through) multiple applications. Applications are managed partly on-premises and partly in (multiple) off-premises clouds. Cloud infrastructures need to scale elastically across multiple data centers, and software-defined networking (SDN) is providing more network flexibility and dynamism. With the advent of the Internet of Things (IoT), the need to share data between applications, sensors, infrastructure, and people (specifically at the edge) will only increase. This raises fundamental questions about how we develop scalable distributed systems: How do we manage the flow of events (data flows)? How do we facilitate frictionless integration of new components into distributed systems and the various data flows in a scalable manner? What primitives do we need to support the variety of protocols? A term often mentioned in this context is reactive programming, a programming paradigm focusing on data flows and the automated propagation of change. The reactive programming trend is partly fueled by event-driven architectures and standards and technologies such as XMPP, RabbitMQ, MQTT, and DDS.
One way to think about distributed systems (complementary to the reactive programming paradigm) is through the concept of a shared (distributed) data fabric, akin to the shared memory model. An example of such a shared data fabric is tuple spaces, developed in the 1980s. You can view the data fabric as a collection of (distributed) nodes that provides a uniform data layer to the applications. The data fabric would be a basic building block on which you can build, for example, a messaging service by having applications (producers) put data into the fabric and other applications (subscribers) get data from the fabric. Similarly, such a data fabric can function as a cache, where a producer (for example a database) puts data into the fabric but associates it with a certain policy (e.g., remove after 1 hour, or remove if certain storage conditions are exceeded). The concept of a data fabric enables applications to be developed and deployed independently from each other (zero-knowledge), as they only communicate via the data fabric, publishing and subscribing to messages in an asynchronous and data-driven way.
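To make the programming model concrete, here is a minimal, in-memory sketch of a tuple-space-style fabric API. It is purely illustrative: the class and method names (DataFabric, put, get) are hypothetical, and a real fabric would of course be distributed, persistent, and policy-driven rather than a single dictionary.

```python
import time
import threading

class DataFabric:
    """Toy in-memory sketch of a shared data fabric (tuple-space style).
    Illustrative only: a real fabric would be distributed and persistent."""

    def __init__(self):
        self._store = {}              # key -> (value, expiry timestamp or None)
        self._lock = threading.Lock()

    def put(self, key, value, ttl_seconds=None):
        # Producers attach an optional time-to-live policy to the data.
        expiry = time.time() + ttl_seconds if ttl_seconds else None
        with self._lock:
            self._store[key] = (value, expiry)

    def get(self, key):
        # Subscribers read by key; expired entries are evicted on access.
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            value, expiry = entry
            if expiry is not None and time.time() > expiry:
                del self._store[key]
                return None
            return value

# A database (producer) caches a result for one hour; a consumer reads it
# back without knowing anything about who produced it.
fabric = DataFabric()
fabric.put("report/2014-01", {"rows": 42}, ttl_seconds=3600)
print(fabric.get("report/2014-01"))   # {'rows': 42}
```

Note how neither side references the other: the producer and consumer share only the key and the fabric, which is exactly the zero-knowledge decoupling described above.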
The goal of the fabric is to offer an infrastructure platform on which to develop and connect applications without each application having to independently implement basic primitives such as security, guaranteed delivery, message routing, data consistency, and availability, freeing developers to focus on the core functionality of the application. This implies that the distributed data fabric is not just a simple data store or messaging bus, but has a set of primitives to support easier and more agile application development.
Such a fabric should be deployable on servers and on other devices such as routers and switches (potentially building on top of a Fog infrastructure). The fabric should be distributed and scalable: adding new nodes should re-balance the fabric. The fabric can span multiple storage media (in-memory, flash, SSD, HDD, …). Storage is transparent to the application (developer), and applications should be able to declare, as a policy, what level of storage they require for certain data. Policies are a fundamental aspect of the data fabric. Some other examples of policies are: (1) how long data should remain in the fabric, (2) what types of applications can access particular data in the fabric (security), and (3) data locality: the fabric is distributed, but sometimes we know in advance that data produced by one application will be consumed by another that is relatively close to the producer.
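As a hedged sketch of what declaring such policies could look like, the snippet below models the three example policies as a simple declarative structure. All names (DataPolicy and its fields) are hypothetical, not an existing API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataPolicy:
    """Hypothetical declarative policy attached to data placed in the fabric."""
    ttl_seconds: Optional[int] = None                      # (1) how long data stays in the fabric
    allowed_apps: List[str] = field(default_factory=list)  # (2) which applications may access it
    locality_hint: Optional[str] = None                    # (3) keep data near its expected consumer
    storage_tier: str = "memory"                           # memory | flash | ssd | hdd

# A sensor reading that expires after one hour, is readable only by the
# analytics application, and is pinned near the edge node that consumes it.
policy = DataPolicy(ttl_seconds=3600,
                    allowed_apps=["analytics"],
                    locality_hint="edge-rack-7",
                    storage_tier="memory")
```

The point of the declarative form is that the application states intent while the fabric decides placement, enforcement, and eviction.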
It is unlikely that there will be one protocol or transport layer for all applications and infrastructures. The data fabric should therefore be capable of supporting multiple protocols and transport layers, and it should support mappings to well-known data store standards (such as object-relational mapping).
The data fabric can be queried, enabling applications to discover and correlate data, and it should support widely used processing paradigms such as map-reduce, enabling applications to bring processing to the data nodes.
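The sketch below illustrates that idea under the same toy assumptions as before: each node holds a shard of the fabric, the map step runs where the data lives, and only small partial results are reduced centrally. The shard layout and function names are invented for illustration.

```python
from functools import reduce

# Hypothetical per-node shards of the fabric (key -> value).
nodes = [
    {"temp/1": 21.5, "temp/2": 22.1},
    {"temp/3": 19.8, "humid/1": 0.43},
]

def map_phase(shard, predicate, mapper):
    # Runs locally on a node: select matching entries and map them.
    return [mapper(k, v) for k, v in shard.items() if predicate(k)]

def query_map_reduce(shards, predicate, mapper, reducer, initial):
    # Bring processing to the data: map on each node, reduce the partials.
    partials = [x for shard in shards
                  for x in map_phase(shard, predicate, mapper)]
    return reduce(reducer, partials, initial)

# Average of all temperature readings stored anywhere in the fabric.
total, count = query_map_reduce(
    nodes,
    predicate=lambda k: k.startswith("temp/"),
    mapper=lambda k, v: (v, 1),
    reducer=lambda a, b: (a[0] + b[0], a[1] + b[1]),
    initial=(0.0, 0),
)
print(total / count)  # ~21.13
```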
It is unrealistic to assume that there will be one data fabric. Instead, there will be multiple data fabrics managed by multiple companies and entities (similar to the network itself). Data fabrics should therefore be connected with each other through gateways, creating a “fabric of fabrics” where needed.
This distributed data fabric can be viewed as a set of interconnected nodes. For large data fabrics (many nodes) it will not be possible to connect each node with all other nodes without sacrificing performance or scalability; instead, a connection overlay and smart routing algorithms (for example, distributed hash tables) are needed to ensure the scalability and performance of the distributed data fabric. The data fabric can be further optimized by coupling the fabric (and its logical connection overlay) to the underlying (virtual) network infrastructure and exploiting that knowledge to power IoT, cloud, and SDN infrastructures.
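To show why a distributed hash table scales here, the following sketch uses consistent hashing: any node can route a key to its owner using only the ring, without global state, and adding a node re-balances only a fraction of the keys. The ring is a generic textbook construction, not any specific fabric’s routing layer.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring for routing keys to fabric nodes.
    Illustrative only; real DHTs add replication and failure handling."""

    def __init__(self, nodes, vnodes=8):
        # Each node gets several virtual positions for smoother balance.
        self._ring = sorted((self._hash("%s#%d" % (node, i)), node)
                            for node in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The first ring position clockwise from the key's hash owns it.
        idx = bisect_right(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("sensor/42/temperature"))  # deterministic owner node
```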
Special thanks to Gary Berger and Roque Gagliano for their discussions and insights on this subject.
Tags: application centric infrastructure, cloud, Corporate Technology Group, CTG, data fabric, distributed systems, IoT, SDN
The other week I attended the “Software Defined Networking 2013” conference in London. This is a UK-based event for the discussion of SDN, OpenFlow and Network Virtualisation solutions from a strategic perspective. There were quite a few interesting perspectives I picked up at this conference. In particular, the conference reinforced for me the potential of SDN -- but if you apply it to the wrong problem, you may not get the return you hope for!
Top of mind for me coming out of this conference was a demo of “What SDN Can Do For You” from one of our competitors. At best, the phrase “using a sledgehammer to crack a nut” comes to mind.
The demo came from our friends in Palo Alto, who once (boldly but incorrectly!) predicted that “Cisco UCS would be dead a year after launch”. They gave an SDN-focused demo that, when I peeled back the onion, didn’t demonstrate a compelling SDN use case. Rather, it convinced me that if you have the particular problem illustrated in their demo, you don’t need SDN: you need a new vendor!
Tags: ACI, application centric infrastructure, architectural approach, Cisco collaboration, Cisco Services, Cisco WebEx, jabber, Jabber Video, network virtualization, onePK, SDN, software defined network
Last week I was in London for the Gartner Data Center Conference. As always, there was a wide range of interesting topics being discussed, all very useful. Working in Cisco Data Center Services, I am interested in many data center topics; however, this year I was particularly keen to hear perspectives on SDN, how the market is evolving, and how the attendees -- including many senior IT practitioners -- are considering SDN adoption.
[Photo: London’s Big Ben at night]
From a Cisco perspective, we were showcasing the recently launched Application Centric Infrastructure (ACI), which generated a lot of interest. There is growing awareness among our customers that ACI could do for networks and applications what Cisco UCS has done for the server market (with UCS server profiles proving a good analogy to help customers understand the potential of ACI).
So what were some of my key takeaways from the SDN discussions I heard here? And what questions, in my view, are still not being discussed sufficiently across the industry?
Tags: ACI, application centric infrastructure, Cisco Data Center, Cisco SDN Controller, Cisco Services, SDN, software defined networking
To read the first part of the Network Matters blog series, which discusses how an architectural approach to mobility is essential for the future of mobility, click here. To read the second part of the series, which focuses on how IT leaders can rely on the network to simplify onboarding new mobile technology, click here. To read the third part of the series, which discusses how service providers can deepen their enterprise customer relationships by addressing pain points and meeting new enterprise mobility challenges, click here.
In the new mobile and cloud era, applications are evolving and changing the role of networking at a rapid pace.
In this final blog post of the Network Matters series, I’ll discuss how mobility is driving an application economy that is enabled by intelligent networks.
Tags: application centric infrastructure, Cisco, cloud, future of mobility, mobility, unified communications, wireless