As IT departments move to private cloud offerings, DevOps methodologies, and continuous integration capabilities, many segments of the data center market have a strong need for more open, programmable, and application-led networks. In these fully automated environments, network automation for infrastructure as a service (IaaS) or applications on demand is becoming essential. As discussed in a recent blog post by Ravi Balakrishnan, the Cisco Nexus 9000 offers the industry's first open and extensible application policy model, helping businesses increase agility, flexibility, and scalability while automating repetitive manual tasks, reducing time to deployment, and easing maintenance.
A recently issued Lippis Report validates that the Cisco Nexus 9000 product line offers the most comprehensive set of open programming tools and functions available, which can be used either independently or in unison with other platform capabilities. The report found that the Cisco Nexus 9000 programming environment provides investment protection through support for open protocols, APIs, and standards that leverage customers' existing networking, services (including security), physical and virtual compute, and storage assets. It also improves business agility by accelerating network application deployment times to minutes through centralized management.
Cisco Nexus 9000 programmability enables use cases across the whole IT delivery chain by orchestrating and automating the provisioning of network infrastructure. Applications now have real-time access to network buffer, congestion, and state information, so they can make better decisions about how they deliver services to end users. In addition, troubleshooting can be automated because applications gain much deeper visibility into the network.
The specific use cases for the Cisco NX-OS API enhancements span data center network engineers and experienced DevOps personnel in cloud and large enterprise IT organizations. For network engineers, NX-OS APIs can simplify and automate common network infrastructure provisioning tasks and enable automated troubleshooting through enhanced network visibility.
DevOps personnel can use NX-OS APIs and automation tools to write custom scripts, or integrate the NX-API into tools they already know, customizing network device data and using it in the way that matters to them, whether to deliver competitive business value or to reduce OpEx through automation.
Cisco 9000 Programmability Highlights
The Cisco NX-OS enhancements for the Cisco Nexus 9000 Series support numerous capabilities that aid automation and orchestration, and they provide investment protection through support for future automation capabilities. Centralized, fine-grained access to Cisco Nexus 9000 networking resources is enabled through support for XML, JSON, representational state transfer (REST), remote procedure call (RPC), NETCONF, Python scripting, Bash and Broadcom chip-level shell access, and Linux containers for developing custom applications. These APIs have full read and write access to the platform, providing programmability, automation, and system access. Cisco NX-OS also supports APIs for rapid integration with existing management and orchestration frameworks, including OpenStack interfaces that provide Cisco policy consistency across physical, virtual, and cloud environments.
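As a concrete illustration of the JSON/RPC access described above, here is a minimal Python sketch that builds the JSON-RPC 2.0 request body the NX-API CLI method expects. The switch hostname, credentials, and the commented-out transport call are placeholders for the sketch, not a definitive integration.

```python
import json

def build_nxapi_payload(command, msg_id=1):
    """Build a JSON-RPC 2.0 request body for the NX-API 'cli' method."""
    return [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": msg_id,
    }]

# The payload would then be POSTed to the switch's NX-API endpoint,
# for example with the 'requests' library (host and credentials below
# are placeholders, and certificate verification is skipped lab-style):
#
#   requests.post("https://<switch>/ins",
#                 data=json.dumps(build_nxapi_payload("show version")),
#                 headers={"content-type": "application/json-rpc"},
#                 auth=("admin", "password"), verify=False)

payload = build_nxapi_payload("show interface brief")
print(json.dumps(payload, indent=2))
```

From here a script can parse the structured JSON reply instead of screen-scraping CLI output, which is what makes the automation use cases above practical.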
I am neither an AC Milan soccer fan nor a connoisseur of haute couture, so it will be no surprise if you wonder what I am doing in Europe's fashion capital, Milan, in the middle of a wintry January.
Without further ado, I will break the suspense. Yes, I am one of the few chosen as Cisco Data Center leads for the Cisco Live Milan event. You may be thinking I have the best job in Silicon Valley, as over the years I have hopped from Melbourne to London to Milan to cover Cisco Live worldwide. You are right, I do have an enviable job, bringing together the best of Cisco data center technologies that help customers get more value from their investment, and I make sure to have some fun in the process. During the event, I will bring you real-time excerpts of the action on the show floor via social media. In this blog, I want to give all you data center IT and networking professionals highlights of the various activities we have on the menu.
If, like me, you are fortunate enough to attend, I am sure you are looking forward to the wall-to-wall keynotes on Jan 28, hosted by Cisco executives Rob Lloyd and Rob Soderbery. Rob Lloyd will discuss how Cisco and its ecosystem of partners are uniquely positioned to connect the unconnected with an open standard and an integrated architecture from the cloud to end devices. In addition, you'll have the opportunity to check out the latest innovations in Cisco ACI and Data Center Networking technologies. Let us pick up the action at the Cisco Campus and data center area in the World of Solutions.
Cisco ACI demos are at the center of all the action in data center switching. These demos highlight the growing significance of Cisco as a data center infrastructure provider. With the successful introduction of Cisco ACI and its seamless integration with Cisco UCS, FlexPod, Vblock, UCS Director, and more, we are able to demonstrate why infrastructure matters and its relevance to applications. I strongly encourage you to check out the Cisco ACI-OpenStack demo, which highlights the provisioning and orchestration of a multi-tenant cloud environment and virtual applications through OpenStack, and shows OpenStack integrated on top of the Cisco APIC interface. Many of you have been eagerly awaiting the integration of L4-L7 services from Citrix and F5 with Cisco APIC, and we have put together a demo that illustrates the setup and insertion of multiple network services into an application network, and the routing of traffic to the required services and the virtual workload. Other ACI demos showcase Cisco Nexus 9000 platform programmability and Cisco ACI integration with Hyper-V, but in the interest of time I will let you discover the exciting details of these demos at your convenience. Besides ACI, we have Unified Fabric demos focusing on the Nexus 7000, Dynamic Fabric Automation, and VXLAN integration with Nexus switching platforms, illustrating the comprehensive portfolio of switching products from Cisco. You will not be disappointed on the demo floor, as the best and brightest engineers from Cisco business units will be available to engage you in technical conversations.
1. VMware's pricing model is fundamentally flawed, which raises OpEx costs and affects network design decisions and scale.
VMware charges customers per port and per VM, increasing the cost of networking by 2x or more while providing less functionality, raising operating expenses, and forcing you to adopt a different network architecture. ACI delivers more functionality with zero VM tax.
For VMware, our customers consistently report pricing starting at $50 or more per VM per month. In competitive engagements, pricing rapidly declines to $15 per VM per month, then lower depending on the negotiation. Customers do not like per-port pricing any more than they like per-VM pricing. All of these models get expensive and alter your designs and scale considerations.
2. Claims that ACI is a proprietary platform or policy model ignore the fact that many aspects of VMware's architecture require vendor lock-in, on top of the premium pricing model.
VMware claims that ACI is proprietary. Yet customers have to get their OVS from VMware, not from the open-source download under an open-source license. Currently, VMware is the only hypervisor platform that locks customers into a proprietary controller; Red Hat, KVM, and Hyper-V all provide open access. ACI contributions are showing up in OpenStack, in IETF drafts, and through VXLAN extensions, and ACI provides the most open implementation in the industry: APIs, data model, and integration with third-party controllers. Federating NSX with third-party controllers, such as HP's, is different from providing open, bidirectional programmability.
3. Openness is really measured by the breadth of infrastructures, OS platforms, orchestration models, etc., that are supported by the policy model, and ACI is rapidly outdistancing NSX in this area.
ACI supports any hypervisor, any encapsulation (VXLAN, NVGRE, VLAN, and even STT), any physical platform, storage, physical compute, Layer 4 through 7 services, and the WAN, with full flexibility to place any workload anywhere and with full policy, performance, and visibility in hardware. ACI supports Open vSwitch and allows a third-party controller to program ACI hardware components. Investment protection is built in: existing platforms are supported, and Nexus 9000 products let you move between enhanced NX-OS mode and ACI mode with a software upgrade.
More and more enterprises are managing distributed infrastructures and applications that need to share data. This data sharing can be viewed as data flows that connect (and flow through) multiple applications. Applications are managed partly on premises and partly in (multiple) off-premises clouds. Cloud infrastructures need to scale elastically over multiple data centers, and software-defined networking (SDN) is providing more network flexibility and dynamism. With the advent of the Internet of Things (IoT), the need to share data between applications, sensors, infrastructure, and people (especially at the edge) will only increase. This raises fundamental questions about how we develop scalable distributed systems: How do we manage the flow of events (data flows)? How do we facilitate frictionless integration of new components into distributed systems and their various data flows in a scalable manner? What primitives do we need to support the variety of protocols? A term often mentioned in this context is reactive programming, a programming paradigm focused on data flows and the automated propagation of change. The reactive programming trend is partly fueled by event-driven architectures and standards such as XMPP, RabbitMQ, MQTT, and DDS.
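To make the reactive idea concrete, here is a minimal Python sketch of a reactive value whose observers are automatically re-notified when the value changes. The `Signal` class and its API are invented for illustration and are not drawn from any of the standards mentioned above.

```python
class Signal:
    """Minimal reactive value: observers are re-run whenever the value changes."""

    def __init__(self, value):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        # The propagation of change: setting the value notifies every observer.
        self._value = new_value
        for observer in self._observers:
            observer(new_value)

    def observe(self, fn):
        """Register an observer and immediately call it with the current value."""
        self._observers.append(fn)
        fn(self._value)

# Usage: a log that automatically tracks every temperature change.
temperature = Signal(20)
readings = []
temperature.observe(readings.append)
temperature.value = 25
```

The point of the paradigm is that the consumer declares its interest once and the runtime propagates changes, rather than the consumer polling for updates.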
One way to think about distributed systems (complementary to the reactive programming paradigm) is through the concept of a shared (distributed) data fabric, akin to the shared-memory model. An example of such a shared data fabric is tuple spaces, developed in the 1980s. You can view the data fabric as a collection of (distributed) nodes that provides a uniform data layer to applications. The data fabric would be a basic building block on which you can build, for example, a messaging service: applications (producers) put data into the fabric, and other applications (subscribers) get the data from the fabric. Similarly, such a data fabric can function as a cache, where a producer (for example, a database) puts data into the fabric but associates it with a certain policy (e.g., remove after one hour, or remove if certain storage conditions are exceeded). The data fabric enables applications to be developed and deployed independently of each other (zero knowledge), as they communicate only via the fabric, publishing and subscribing to messages in an asynchronous, data-driven way.
The goal of the fabric is to offer an infrastructure platform for developing and connecting applications without each application having to independently implement basic primitives like security, guaranteed delivery, message routing, data consistency, and availability, freeing the developer to focus on the core functionality of the application. This implies that the distributed data fabric is not just a simple data store or messaging bus; it has a set of primitives that support easier and more agile application development.
Such a fabric should be deployable on servers and on other devices such as routers and switches (potentially building on top of a fog infrastructure). The fabric should be distributed and scalable: adding new nodes should rebalance the fabric. The fabric can span multiple storage media (in-memory, flash, SSD, HDD, and so on). Storage is transparent to the application (developer), and applications should be able to specify, as a policy, what level of storage they require for certain data. Policies are a fundamental aspect of the data fabric. Other examples of policies are: (1) how long data should remain in the fabric; (2) what types of applications can access particular data in the fabric (security); (3) data locality: the fabric is distributed, but sometimes we know in advance that data produced by one application will be consumed by another that is relatively close to the producer.
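A toy sketch of a single policy-aware fabric node might look as follows in Python. The `DataFabric` class and its put/get/subscribe API are hypothetical, and the sketch shows only the tuple-space-style primitives and a time-to-live policy; a real fabric would be distributed across nodes, persistent, and secured.

```python
import time

class DataFabric:
    """Toy in-memory sketch of one data fabric node with a TTL policy."""

    def __init__(self):
        self._store = {}        # key -> (value, expiry timestamp or None)
        self._subscribers = {}  # key -> list of callbacks

    def put(self, key, value, ttl=None):
        """Producers put data into the fabric, optionally with a TTL policy."""
        expiry = time.time() + ttl if ttl is not None else None
        self._store[key] = (value, expiry)
        # Data-driven delivery: notify subscribers interested in this key.
        for callback in self._subscribers.get(key, []):
            callback(key, value)

    def get(self, key):
        """Consumers get data from the fabric; expired data is dropped lazily."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and time.time() > expiry:
            del self._store[key]  # TTL policy: data has aged out of the fabric
            return None
        return value

    def subscribe(self, key, callback):
        """Register interest in a key; zero knowledge of who produces it."""
        self._subscribers.setdefault(key, []).append(callback)
```

Note how the producer and subscriber never reference each other, only the fabric, which is the zero-knowledge decoupling described above.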
It is unlikely that there will be one protocol or transport layer for all applications and infrastructures. The data fabric should therefore be capable of supporting multiple protocols and transport layers, and should support mappings of well-known data store standards (such as object-relational mapping).
The data fabric can be queried, enabling applications to discover and correlate data, and it should support widely used processing paradigms such as MapReduce, enabling applications to bring processing to the data nodes.
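To sketch what a MapReduce-style query over the fabric could look like, here is a minimal single-process Python illustration. The `map_reduce` helper and the sample per-node data are invented for the example; in a real fabric, the map phase would run locally on each data node and only the grouped intermediate results would travel over the network.

```python
from collections import defaultdict

def map_reduce(nodes, map_fn, reduce_fn):
    """Apply map_fn 'at' each node's data, group by key, then reduce each group."""
    grouped = defaultdict(list)
    for node_data in nodes:            # in a real fabric: runs on each node
        for key, value in map_fn(node_data):
            grouped[key].append(value)
    return {key: reduce_fn(key, values) for key, values in grouped.items()}

# Example: count sensor readings by type across three hypothetical fabric nodes.
nodes = [["temp", "humidity"], ["temp"], ["temp", "humidity"]]
counts = map_reduce(
    nodes,
    lambda data: [(reading, 1) for reading in data],  # map: emit (type, 1)
    lambda key, values: sum(values),                  # reduce: sum the counts
)
# counts == {"temp": 3, "humidity": 2}
```

The design point is that processing moves to where the data already lives, instead of shipping all raw data to one central query engine.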
It is unrealistic to assume that there will be one data fabric. Instead, there will be multiple data fabrics managed by multiple companies and entities (similar to the network). Data fabrics should therefore be connected to each other through gateways, creating a "fabric of fabrics" where needed.
This distributed data fabric can be viewed as a set of interconnected nodes. For large data fabrics (many nodes), it will not be possible to connect every node to every other node without sacrificing performance or scalability; instead, a connection overlay and smart routing algorithms (for example, distributed hash tables) are needed to ensure the scalability and performance of the fabric. The data fabric can be further optimized by coupling it (and its logical connection overlay) to the underlying (virtual) network infrastructure and exploiting that knowledge to power IoT, cloud, and SDN infrastructures.
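As a sketch of the routing-overlay idea, here is a minimal consistent-hashing ring in Python that deterministically maps keys to fabric nodes, the basic building block of a distributed hash table. The `ConsistentHashRing` class is hypothetical and omits replication, node failure handling, and the actual network overlay.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Sketch of consistent hashing for routing keys to data fabric nodes."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets many virtual points on the ring so that
        # keys spread evenly and rebalancing on membership change is small.
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Route a key to the first virtual point at or after its hash."""
        idx = bisect_right(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

# Usage: any node can compute where a key lives without a central directory.
ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("sensor-42")
```

Because every node computes the same mapping independently, lookups need no central directory, which is what makes the overlay scale.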
Special thanks to Gary Berger and Roque Gagliano for their discussions and insights on this subject.
The other week I attended the "Software Defined Networking 2013" conference in London. This is a UK-based event for discussing SDN, OpenFlow, and network virtualisation solutions from a strategic perspective. I picked up quite a few interesting perspectives at this conference. In particular, the conference reinforced for me the potential of SDN. But if you apply it to the wrong problem, you may not get the return you hope for!
Top of mind for me coming out of this conference was a demo of "What SDN Can Do For You" from one of our competitors. At best, the phrase "using a sledgehammer to crack a nut" comes to mind.
The demo came from our friends in Palo Alto, who once (boldly but incorrectly!) predicted that "Cisco UCS would be dead a year after launch". They gave an SDN-focused demo that, when I "peeled back the onion", didn't demonstrate a compelling SDN use case. Rather, it convinced me that if you have the particular problem illustrated in their demo, you don't need SDN: you need a new vendor!