What better way to spend Valentine’s Day than to watch a webcast on OpenFlow and SDN, perhaps with your significant other? The last couple of years have seen considerable buzz around aspects of software-defined networking, and a significant portion of the early seed discussion was around OpenFlow. As part of the Cisco Open Network Environment webcast series, this time on February 14th, 2013 at 9 AM PST, we take a look at an “Introduction to OpenFlow”: What is it? How does it work? What are some of the potential use-cases?
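For those who want a feel for the topic ahead of the webcast, here is a minimal, controller-agnostic sketch of OpenFlow’s central idea: a controller programs match-action entries into a switch’s flow table, and the switch applies the highest-priority matching rule to each packet. The class names and field names below are hypothetical simplifications for illustration, not the actual protocol encoding.

```python
# Toy model of an OpenFlow flow table: a controller pushes match-action
# entries; the switch applies the highest-priority matching entry.
# (Hypothetical simplification -- real OpenFlow defines binary message
# formats and many more match fields and actions.)

class FlowEntry:
    def __init__(self, priority, match, actions):
        self.priority = priority  # higher wins on overlapping matches
        self.match = match        # e.g. {"in_port": 1, "eth_dst": "aa:bb:..."}
        self.actions = actions    # e.g. ["output:2"] or ["drop"]

class FlowTable:
    def __init__(self):
        self.entries = []

    def add_flow(self, priority, match, actions):
        """What a controller does over the OpenFlow channel: install a rule."""
        self.entries.append(FlowEntry(priority, match, actions))
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        """What the switch does per packet: first matching entry wins."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]  # table-miss: punt to the controller

table = FlowTable()
table.add_flow(priority=10, match={"eth_dst": "aa:bb:cc:dd:ee:ff"}, actions=["output:2"])
print(table.lookup({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}))  # ['output:2']
print(table.lookup({"in_port": 1, "eth_dst": "11:22:33:44:55:66"}))  # ['send_to_controller']
```

The key takeaway is the split: forwarding decisions live in a programmable table, while the decision-making logic lives in an external controller.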
Joining me in this discussion will be David Ward, Cisco CTO of Engineering and Chief Architect. At the time of recording, David also wears the hat of Chair of the Technical Advisory Group at the Open Networking Foundation (ONF). So he brings perspectives both as someone who is driving the evolution of the protocol and as someone guiding its implementation across several products within the Cisco portfolio.
Also joining the webcast to lend an end-user perspective will be Matt Davy, formerly of Indiana University, where he was executive director of the INCenter facility. Matt has recently moved on to a new role, but over the last few years at the university he built a lighthouse test bed around OpenFlow and SDN. Matt will talk about campus slicing and his experiences with OpenFlow. Providing the service provider perspective from NTT Communications will be Yuichi Ikejiri, Director of the Network Technology Services division.
As mentioned before, this is part of an educational series. If you have not watched the first in the series, entitled “An Introduction to OpenStack”, please feel free to register and watch it here. The panel of Lew Tucker and Raj Patel below provides interesting perspectives on OpenStack.
Consider these impressive stats shared in a keynote from Cisco’s CTO and CSO Padmasree Warrior last week at Cisco Live, London:
50 billion “things” – trees, vehicles, traffic signals, devices and whatnot – will be connected by 2020 (vs. 1,000 devices connected in 1984)
More information was created in 2012 than in the previous 5,000 years combined!
Two-thirds of the world’s mobile data will be video by 2015.
These statistics may seem a bit surprising, but the fact is, they cannot be ignored by CIOs and others chartered with the responsibility of managing IT infrastructure.
Impact on Enterprise and SP Infrastructure strategies
Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening in the “access”, but they are causing a ripple effect upstream in data center and cloud environments. Coupled with new application requirements, they are triggering CIOs across large Enterprises and Service Providers to rapidly evolve their IT infrastructure strategies.
It is much the same with cloud infrastructure strategies. Even as Enterprises have aggressively pursued the journey to the Private Cloud, their preference for hybrid clouds, where they can enjoy the “best of both worlds” – public and private – has grown as well. However, the move to hybrid clouds has been somewhat hampered by the challenges outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.
The Fabric approach
To address many of these issues, Cisco has long advocated the concept of a holistic data center fabric, which is at the heart of its Unified Data Center philosophy. The fundamental premise – breaking down the disparate technology silos across network, compute and storage and bringing them together – is what makes this so compelling. At the heart of it is the Cisco Unified Fabric, serving as the glue.
As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.
Let’s talk about SCALING the fabric first:
Industry’s highest-density L2/L3 10G/40G switch: Building upon our previous announcement redefining fabric scale, this time we introduce the new Nexus 6000 family in two form factors – the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to be complementary to Nexus 5500 and Nexus 7000 series deployments; it is not to be confused with the Catalyst 6500 or the Nexus fabric interconnects.
The Nexus 6000 is built on Cisco’s custom silicon and delivers 1-microsecond port-to-port latency. It carries forward some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. Clearly, as in the past, Cisco’s ASICs have differentiated themselves against the lowest-common-denominator approach of merchant silicon by delivering both better performance and greater value, thanks to tight integration with the software stack.
The Nexus 5500, incidentally, gets 40G expansion modules and is accompanied by a brand-new Fabric Extender – the 2248PQ – which comes with 40G uplinks as well. All of these, along with 10G server interfaces, help pair 10G server access with 40G aggregation.
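As a rough illustration of what pairing 10G access with 40G aggregation means for capacity planning, consider a hypothetical leaf with 48 x 10G host-facing ports and 4 x 40G uplinks; the port counts here are an assumption for the sake of the arithmetic, not a spec sheet.

```python
# Back-of-the-envelope oversubscription for a hypothetical leaf switch
# with 48 x 10G host-facing ports and 4 x 40G uplinks (assumed figures).
host_bw = 48 * 10    # 480 Gbps of server-facing capacity
uplink_bw = 4 * 40   # 160 Gbps toward the 40G aggregation/spine
print(f"Oversubscription ratio: {host_bw / uplink_bw:.0f}:1")  # 3:1
```

A 3:1 ratio like this is a common design point; denser 40G uplinks let architects dial that ratio down without changing the server-facing 10G access layer.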
Also, as a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings performance analytics, application visibility and network intelligence. This is the first services module, with others to follow, and it brings parity with the new vNAM functionality as well.
Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that help extend the fabric – the Fabric Extender (FEX) solution is very popular for extending the fabric to the server/VM, as are some of the Data Center Interconnect technologies such as Overlay Transport Virtualization (OTV) and the Locator/ID Separation Protocol (LISP), among others. Each has its benefits.
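To give a flavor of why something like LISP helps with fabric extension: it separates a workload’s identity (EID) from its current location (RLOC), so the endpoint address stays fixed while the locator changes as the workload moves. Here is a toy sketch of that mapping idea, with made-up addresses and a plain dictionary standing in for the real distributed mapping system.

```python
# Toy illustration of LISP's locator/identity split: the endpoint ID (EID)
# never changes; only its routing locator (RLOC) does when the VM moves.
# (Addresses are made up; the real mapping system is a distributed database.)
mapping = {"10.1.1.5": "192.0.2.1"}   # EID -> RLOC (workload lives in DC-A)

def deliver(eid):
    rloc = mapping[eid]               # ingress router looks up current location
    print(f"Encapsulate traffic for {eid} toward {rloc}")

deliver("10.1.1.5")                   # -> 192.0.2.1
mapping["10.1.1.5"] = "198.51.100.7"  # VM moves to DC-B; only the RLOC updates
deliver("10.1.1.5")                   # -> 198.51.100.7, same EID
```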
The Nexus 1000V InterCloud takes these to the next level by allowing the data center fabric to be extended to provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. This is meant to help lower the barriers to hybrid cloud deployments and is designed to be a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe, in 1H CY13.
This video does a good job of explaining the concepts of the Intercloud solution:
Cloud computing has evolved from the hype cycle of the last few years, to being an integral part of the Enterprise IT strategy as well as a fundamental service provider offering. The types of cloud constructs have evolved as well – public, private, hybrid and community clouds are all the basic variants, with more sophisticated application-specific cloud offerings continuing to evolve.
While the journey to the private cloud continues and is relatively mature, at least in the more developed countries, and public cloud service offerings are becoming relatively ubiquitous, adoption and deployment of hybrid cloud offerings have seen relatively modest uptake.
The reason is not that the allure of hybrid clouds is lacking, or that there are few use-cases. Quite the opposite: there are several use-cases, all of which are applicable to real-world IT deployments today:
Workload migration: Seamless migration of workloads from the data center or private cloud to the public cloud for better capacity utilization.
Dev/QA operations: Testing new applications can create a requirement for additional temporary capacity, and an extensible hybrid cloud is quite appealing here, instead of investing in on-premise infrastructure.
Cloud-bursting: To handle the needs of bursty applications, temporary capacity allocation in public cloud environments can be extremely cost-effective, providing the convenience of “infrastructure-on-demand” (see the sketch after this list).
Disaster recovery: Providing data resiliency in case of failure of on-premise resources.
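To make the cloud-bursting idea concrete, here is a minimal sketch of the kind of decision logic involved. The threshold value and the placement function are hypothetical illustrations, not a reference to any particular product.

```python
# Minimal sketch of cloud-bursting decision logic: keep workloads
# on-premise until utilization crosses a threshold, then overflow to
# temporary public-cloud capacity. The threshold is a made-up example.

BURST_THRESHOLD = 0.85   # burst when the private cloud is 85% utilized

def place_workload(private_utilization, workload):
    if private_utilization < BURST_THRESHOLD:
        return f"run {workload} on-premise"
    # "infrastructure-on-demand": rent capacity only while the burst lasts
    return f"provision {workload} in public cloud (temporary)"

print(place_workload(0.60, "batch-job"))  # run batch-job on-premise
print(place_workload(0.92, "batch-job"))  # burst to the public cloud
```

The hard parts in practice are everything around this decision: moving the workload securely, preserving its network identity and policies, and tearing the rented capacity down when the burst subsides.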
If the use-cases are real and the benefits so apparent, why have Enterprises not gone all out to deploy more robust hybrid clouds? Why have only a few Enterprises and selective applications followed this model?
I can think of a few reasons. To make it real, let’s consider the use-case of migrating a virtual machine (VM) from the private cloud to a provider cloud, as an example to illustrate some of the challenges:
Among all the IT domains, perhaps the most action is in the data center and, by extension, in the cloud. Virtualization has taken root and delivered a lot of operational efficiency. It has introduced some interesting challenges as well. Virtual Machine (VM) mobility is one: tracking workloads as they move between servers, within and across data centers, is more fun than most people imagined. So how does one take this dynamic environment and leverage it to fulfill requirements such as:
Delivering anything as a service – handling heterogeneous workloads for any application
Dealing with VM mobility – optimizing resource allocation across any location (a sketch of this follows the list)
Offering dynamic response – responding to real-time requirements at any scale
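As a concrete, if heavily simplified, illustration of the VM mobility requirement, here is a sketch of a placement function that picks the host with the most headroom across data centers and records where each VM lands so it can be tracked as it moves. All host names and capacities are invented for the example.

```python
# Simplified sketch of location-aware VM placement: pick the host with
# the most free capacity across data centers, and record where the VM
# landed so it can be tracked as it moves. All names/numbers are invented.

hosts = {
    "dc1-host1": {"free_gb": 32},
    "dc1-host2": {"free_gb": 8},
    "dc2-host1": {"free_gb": 64},
}
vm_location = {}  # VM name -> current host, updated on every placement

def place_vm(vm, needed_gb):
    candidates = {h: info for h, info in hosts.items() if info["free_gb"] >= needed_gb}
    if not candidates:
        raise RuntimeError(f"no capacity for {vm}")
    best = max(candidates, key=lambda h: candidates[h]["free_gb"])
    hosts[best]["free_gb"] -= needed_gb
    vm_location[vm] = best            # tracking: we always know where it is
    return best

print(place_vm("web-01", 16))         # -> dc2-host1 (most headroom)
print(vm_location)                    # -> {'web-01': 'dc2-host1'}
```

A real scheduler weighs far more than free memory: network locality, storage access, policy and licensing constraints all factor in, which is precisely why a holistic fabric view matters.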
How does one solve these emerging challenges to achieve the next levels of productivity and efficiency?
For quite some time, Cisco has believed in the promise of “going beyond silos” (yes, that’s the campaign we launched as well, for those of you who saw the recent ads). But awareness campaigns apart, the concept is pretty simple – how do we take some of the traditional silos in the data center, like the network, compute, storage and application services, and bring them together – holistically – to deliver better efficiency, resource utilization, simplicity and cost benefits?
Fundamentally, this is the promise of Cisco’s data center fabric approach – it delivers on the vision of a high-performance, shared infrastructure that brings together the network, compute, storage access elements, and L4-7 application services into a tightly integrated resource. It is open, integrated, flexible, scalable, resilient and secure. And it is built on a vision that Cisco has been executing on for 3+ years now, on the foundation of Unified Fabric, Unified Network Services and Unified Computing. This foundation will form the bedrock for customers looking to move toward cloud-based models, exploring application independence, location freedom and massive scale.