For those who are on the learning curve of network programmability, open networking and SDN (like we are), I’d like to invite you to the third in a series of educational webcasts on these topics. Brought under the umbrella of the Cisco Open Network Environment, this particular webcast focuses on “An Introduction to onePK” and will be broadcast on April 9th, 2013 at 9 AM PST. You can register here.
The Cisco Open Network Environment is all about bringing the network closer to applications. One way of doing that is by exposing network devices to applications through a rich set of APIs that can help tap into the intelligence inherent in the hardware and ASICs as well as in the network operating systems. This is what onePK is all about: it’s a single platform kit that will span Cisco’s entire network infrastructure portfolio across Enterprise and Service Provider, exposing it to applications in a homogeneous way and allowing app developers to tap into the power of the open network.
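To make the idea concrete, here is a minimal, self-contained sketch of the programming model such an API enables: the application holds an object handle onto a device and queries it for operational data, instead of screen-scraping the CLI. The classes below (NetworkElement, Interface) are mocks invented purely for illustration; they are not the actual onePK SDK surface, which shipped with its own C, Java and Python bindings.

```python
# Conceptual sketch of the onePK idea: an application programs against a
# network device as an object. Everything here is a mock for illustration;
# it is NOT the real onePK SDK.
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    utilization_pct: float  # stand-in for counters a real device exposes

class NetworkElement:
    """Mock API handle onto one router/switch."""
    def __init__(self, address: str):
        self.address = address
        # In a real SDK this data would come from a live device session.
        self._interfaces = [Interface("Gig0/1", 92.0), Interface("Gig0/2", 11.0)]

    def get_interfaces(self):
        return self._interfaces

def report_busy_interfaces(element: NetworkElement, threshold_pct: float = 80.0):
    """List interfaces whose utilization exceeds a threshold."""
    return [(i.name, i.utilization_pct)
            for i in element.get_interfaces()
            if i.utilization_pct > threshold_pct]

if __name__ == "__main__":
    element = NetworkElement("192.0.2.1")
    for name, pct in report_busy_interfaces(element):
        print(f"{name}: {pct:.0f}% utilized")
```

The point of the pattern is simply that the device becomes a programmable object: operational state, events and policy hooks are surfaced to the application rather than locked inside the box.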
Cisco announced its Open Network Environment, or Cisco ONE, strategy in June 2012 and has been in execution mode since then. onePK happens to be a key proof point of this cross-architectural strategy.
Join me on this webcast, where I will host Ayman Sayed, SVP of Cisco’s Network Operating Systems Group, as the lead Cisco expert on this topic. We will also be joined by two of the development partners working on onePK trials: Brendon Whateley, Principal Solution Architect at Starview Inc., and Kamil Knotek, Chief of R&D at Pramacomm Prague spol s.r.o. We will also show some new demos.
In case you missed the last webcast, “An Introduction to OpenFlow” with David Ward, CTO of Cisco Engineering and Chief Architect: we had a turnout from 84 countries, and over 120 questions were answered by our question managers in the one-hour period. You can watch a replay of the webcast here.
Consider these impressive stats shared in a keynote by Cisco’s CTO and CSO Padmasree Warrior last week at Cisco Live, London:
50 billion “things”, including trees, vehicles, traffic signals, devices and more, will be connected by 2020 (vs. 1,000 devices connected in 1984)
2012 alone created more information than the previous 5,000 years combined!
Two-thirds of the world’s mobile data will be video by 2015.
These statistics may seem surprising, but they cannot be ignored by CIOs and others charged with managing IT infrastructure.
Impact on Enterprise and SP Infrastructure strategies
Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening at the access layer, but they are causing a ripple effect upstream in data center and cloud environments. Coupled with new application requirements, they are triggering CIOs across large Enterprises and Service Providers to rapidly evolve their IT infrastructure strategies.
It is much the same with cloud infrastructure strategies. Even as Enterprises have aggressively pursued the journey to Private Cloud, their preference for hybrid clouds, where they can enjoy the best of both public and private worlds, has grown as well. However, the move to hybrid clouds has been somewhat hampered by the challenges outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.
The Fabric approach
To address many of these issues, Cisco has long advocated the concept of a holistic data center fabric, the cornerstone of its Unified Data Center philosophy. The fundamental premise of breaking down silos and bringing together disparate technologies across network, compute and storage is what makes this so compelling. At the heart of it is the Cisco Unified Fabric, serving as the glue.
As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.
Let’s talk about SCALING the fabric first:
Industry’s highest-density L2/L3 10G/40G switch: Building upon our previous announcement redefining fabric scale, this time we introduce the new Nexus 6000 family in two form factors, the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to complement Nexus 5500 and Nexus 7000 deployments; it is not to be confused with the Catalyst 6500 or the Nexus fabric interconnects.
The Nexus 6000 is built with Cisco’s custom silicon and delivers 1-microsecond port-to-port latency. It forward-propagates some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. Clearly, as in the past, Cisco’s ASICs have differentiated themselves from the lowest-common-denominator approach of merchant silicon, delivering both better performance and greater value thanks to tight integration with the software stack.
The Nexus 5500, incidentally, gets 40G expansion modules and is accompanied by a brand-new Fabric Extender, the 2248PQ, which comes with 40G uplinks as well. All of these, along with the 10G server interfaces, help pair 10G server access with 40G server aggregation.
Also, as a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings performance analytics, application visibility and network intelligence. This is the first services module, with others to follow, and it brings parity with the new vNAM functionality as well.
Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that help build fabric extensibility. The Fabric Extender (FEX) solution is very popular for extending the fabric to the server/VM, as are some of the Data Center Interconnect technologies such as Overlay Transport Virtualization (OTV) and the Locator/ID Separation Protocol (LISP), among others. Each has its own benefits.
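Conceptually, overlay-based DCI technologies like OTV share one simple idea: take the original Layer-2 frame, wrap it with an overlay header, and carry it across any IP transport between sites. The toy encapsulation below illustrates that idea only; the 4-byte header layout is invented for clarity and is not the actual OTV wire format.

```python
# Toy "MAC-in-IP overlay" encapsulation sketch. The header layout is a
# simplified illustration, NOT the real OTV (or LISP) packet format.
import struct

def encapsulate(l2_frame: bytes, overlay_id: int) -> bytes:
    # Prepend a minimal overlay header (here: just a 4-byte instance ID);
    # the original Ethernet frame rides untouched as the payload.
    return struct.pack("!I", overlay_id) + l2_frame

def decapsulate(packet: bytes):
    # Strip the overlay header at the remote site and recover the frame.
    (overlay_id,) = struct.unpack("!I", packet[:4])
    return overlay_id, packet[4:]

if __name__ == "__main__":
    frame = b"\x00\x11\x22\x33\x44\x55" * 2 + b"payload"
    oid, recovered = decapsulate(encapsulate(frame, overlay_id=7))
    assert recovered == frame and oid == 7
```

Because the inner frame is untouched, the two sites behave as one Layer-2 domain even though only IP connectivity exists between them; that is what lets VMs and their L4-7 policies move across sites.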
The Nexus 1000V Intercloud takes these to the next level by allowing the data center fabric to be extended to provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. This is meant to lower the barriers to hybrid cloud deployments and is designed to be a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe (1H CY13).
This video does a good job of explaining the concepts of the Intercloud solution:
Cloud computing has evolved from the hype cycle of the last few years to being an integral part of Enterprise IT strategy as well as a fundamental service provider offering. The types of cloud constructs have evolved as well: public, private, hybrid and community clouds are the basic variants, with more sophisticated application-specific cloud offerings continuing to emerge.
While the journey to the private cloud continues and is relatively mature, at least in the more developed countries, and public cloud service offerings are becoming relatively ubiquitous, adoption and deployment of hybrid cloud offerings has had a relatively modest uptake.
The reason is not that the allure of hybrid clouds is weak, or that there are few use-cases. Quite the opposite: there are several use-cases, all of which are applicable to real-world IT deployments today:
Workload migration: Seamless migration of workloads from the data center or private cloud to the public cloud for better capacity utilization.
Dev/QA operations: Testing new applications can require additional temporary capacity, and an extensible hybrid cloud is quite appealing here, instead of investing in on-premise infrastructure.
Cloud-bursting: To handle the needs of bursty applications, temporary capacity allocation in public cloud environments can be extremely cost-effective, providing the convenience of “infrastructure-on-demand” (see the sketch after this list).
Disaster recovery: Providing data resiliency in case of failure of on-premise resources.
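To illustrate the cloud-bursting pattern referenced above, here is a minimal sketch of the decision loop an orchestration layer might run. The hooks it takes (get_utilization, provision_vm, release_vm) are hypothetical stand-ins for whatever monitoring and cloud provisioning APIs are in use; none of them correspond to a specific product.

```python
# Minimal cloud-bursting control loop sketch: rent public-cloud capacity
# when private-cloud utilization runs hot, release it when demand subsides.
# All injected callables are hypothetical placeholders.
import time

BURST_THRESHOLD = 0.85    # burst above 85% private-cloud utilization
RELEASE_THRESHOLD = 0.60  # release burst capacity below 60%

def burst_control_loop(get_utilization, provision_vm, release_vm, poll_secs=60):
    burst_vms = []  # handles to temporary public-cloud VMs we are paying for
    while True:
        util = get_utilization()  # 0.0..1.0 across the private cloud
        if util > BURST_THRESHOLD:
            # Demand exceeds on-premise capacity: add temporary capacity.
            burst_vms.append(provision_vm())
        elif util < RELEASE_THRESHOLD and burst_vms:
            # Demand has subsided: give back rented capacity to save cost.
            release_vm(burst_vms.pop())
        time.sleep(poll_secs)
```

The hysteresis between the two thresholds is deliberate: it keeps the loop from thrashing (provisioning and releasing on every small fluctuation around a single threshold).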
If the use-cases are real and the benefits are so apparent, why have Enterprises not gone all out to deploy more robust hybrid clouds? Why have only a few Enterprises and selective applications followed this model?
I can think of a few reasons. To make it real, let’s consider the use-case of migrating a virtual machine (VM) from the private cloud to a provider cloud as an example to illustrate some of the challenges: