For those of you wondering about the impact on Cisco of Software Defined Networking and the combined SDN strategy of VMware and Nicira, I point you to a very rational and well-articulated article by Mike Fratto of Network Computing that basically says Cisco doesn’t have much to worry about. (Enterprise Strategy Group had already said something similar, by the way.)
Specifically, Fratto says:
The lack of programmability in existing networking hardware is certainly a problem, but VMware’s acquisition of Nicira does not mean that Cisco and its ilk will be marginalized… It does mean the role and management of the physical network is changing, and I think Cisco is further ahead than most of its competitors in creating a vision for the next phase of networking.
I couldn’t agree more. Since Cisco Live!, where we announced our Cisco ONE strategy for network programmability as well as the advances in our Nexus 1000V portfolio for virtual network overlays, I have been posting on many of the same points.
My take here was that the VMware-Nicira acquisition did not portend a strategic break with Cisco, and while there are some obvious overlaps in our product lines, there are still a number of areas of collaboration, cooperation and interoperability. The virtual network infrastructure is just one piece of a larger software stack and the differentiation will likely be decided in the orchestration, management and applications built on top of the newly programmable infrastructures sometime down the road. Read More »
Tags: Cisco ONE, Cisco Open Network Environment, FabricPath, LISP, Nexus 1000v, Nexus 5000, Nexus 7000, Nicira, OpenStack, OTV, SDN, software defined networking, virtual network overlays, VMware, vPath, VXLAN
Stretching the Olympic theme of my previous blog, where I used the analogy of a 100m sprinter and his backup team to introduce the new Cisco Intelligent Automation for Cloud Deployment Services, I’d like to now discuss how to roll out new cloud projects in the data center. Think again about a team of Olympic champions – the Team GB (Great Britain) cycling team illustrates this principle so well, with their fabulous winning streak, not least the incredibly exciting keirin event win by my countryman Sir Chris Hoy (yes, a fellow Scot, though that’s where the association ends). Such teams don’t often win with a “big bang,” all-at-once approach. Their training and successes usually build incrementally, over several years and phases.
In the case of Team GB Cycling, they have developed from practically “also-rans” in 1998 to consistent world beaters in Beijing 2008 and now London 2012, improving incrementally, event by event and year by year, and demonstrating successes as they went along. In essence, they have used an approach we in Cisco sometimes call “Crawl, Walk, Run” to describe the progression to success. From my experience over the past 25 years in IT, there are big lessons here for IT project delivery. Let’s use a cloud automation project as an example.
Read More »
Tags: cisco_services, cloud, cloud_computing, data center, intelligent automation
Recently, I wrote an article on PaaS for IT BusinessEdge entitled “The Road to PaaS: Understanding Your Post-IaaS Options.” Here’s an excerpt.
The Road to PaaS
PaaS is an enticing proposition that has generated a lot of market buzz.
But PaaS forces tradeoffs and it shouldn’t be seen as a one-size-fits-all proposition.
To understand, I like to draw the distinction between what I call “Silicon Valley PaaS” and “Enterprise PaaS.” The majority of the discussion in the market today revolves around the Silicon Valley PaaS pattern, which is a truly abstracted “black box” approach to software platforms.
This form of PaaS exposes a set of standardized services to which you write your applications, completely sheltering developers from the underlying complexity below the PaaS abstraction.
It makes a lot of sense for brand-new apps built with modern languages like Python and Ruby in greenfield development environments that are highly standardized.
The basic premise of the post is that PaaS for an enterprise is VERY different from PaaS for a Silicon Valley start up. And nowhere is it more different than in the network requirements.
The PaaS customer is a developer who will code an application and use the underlying services offered by the PaaS stack, such as database, storage, queueing, etc. The developer deploys the code, selects a few options, and the code is live.
So what’s going on with the network? Well, the PaaS layer will need to auto-scale, fail over and deliver performance at some level. It may need its own domain as well. That PaaS layer will need to talk to underlying network services such as firewalls, switches, etc. The PaaS layer really needs access to infrastructure models that deliver network containers matched to whatever abstraction it exposes.
That’s hard enough to do when all the containers are the same, as they would be in a Silicon Valley PaaS offering.
It doesn’t work with the existing enterprise platforms. This is a big opportunity for innovation.
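To make the network-container idea concrete, here is a minimal, purely hypothetical sketch — none of these class or field names come from any shipping product. It shows how a developer-facing PaaS option (internet-facing, needs a database) might be translated into the network container the infrastructure layer would have to deliver.

```python
# Hypothetical sketch only: illustrates mapping a PaaS app spec onto
# a "network container" (segment, firewall rules, load-balancer VIP).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NetworkContainer:
    """The slice of network services one application tier receives:
    its own segment, firewall policy, and (optionally) a LB VIP."""
    segment_id: int
    fw_rules: list = field(default_factory=list)
    lb_vip: Optional[str] = None

def container_for(app_spec: dict, next_segment: int) -> NetworkContainer:
    """Translate developer-facing PaaS options into a container request."""
    rules = []
    vip = None
    if app_spec.get("internet_facing"):
        rules.append("allow tcp/443 from any")
        vip = "203.0.113.10"          # example VIP from a shared pool
    if app_spec.get("uses_db"):
        rules.append("allow tcp/5432 to db-tier")
    return NetworkContainer(segment_id=next_segment, fw_rules=rules, lb_vip=vip)

c = container_for({"internet_facing": True, "uses_db": True}, next_segment=7001)
```

In a Silicon Valley PaaS, every container looks the same; the enterprise version of this function would have to emit many different shapes, which is exactly where it gets hard.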
Tags: Cisco Intelligent Automation for Cloud, cloud, Cloud Management, intelligent automation, orchestration, paas, Service Orchestration, unified management
Continuing on our theme of virtual network overlays and programmable networks, today we’ll look at how to increase workload mobility over more data center and cloud resources. If server virtualization increases resource utilization and reduces costs, and data center consolidation is a good thing, then it follows that the larger the resource pool that your virtual workloads can migrate over, the more cost effective your IT operation can be. And if your mobility diameter spans multiple sites, you can obviously improve your fault tolerance as well. We call this increasing your mobility diameter, and we’ll complement what we’ve already learned about VXLAN and virtual overlays with some new technologies to seamlessly scale your diameter up. (Sounds like some sort of bizarre reverse Weight Watchers program, doesn’t it?).
As we noted in our VXLAN overview, VXLANs enable private virtual overlays over layer 3 boundaries via their MAC-in-UDP encapsulation and the cool way they filter MAC address broadcasts to only the right subnets. However, when you are doing full-on application migration over a layer 3 boundary, VXLAN alone isn’t going to do it. In order to extend virtual workload mobility beyond layer 2 boundaries, Cisco came up with Overlay Transport Virtualization (OTV), which can work in conjunction with VXLAN to extend application mobility to any point the VXLAN virtual overlay can reach. And not surprisingly, the media wizards over at TechWise TV have a great video that takes all the complexity of OTV and makes it cartoonishly simple.
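If you want to see just how simple the MAC-in-UDP trick is at the byte level, here is a small sketch of the VXLAN header as defined in RFC 7348: 8 bytes prepended to the original Ethernet frame, carrying a flag bit and a 24-bit VXLAN Network Identifier (VNI), with the whole thing then sent as a UDP payload (the IANA-assigned port is 4789).

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    RFC 7348 layout: 8 bits of flags (0x08 = VNI is valid), 24 reserved
    bits, a 24-bit VNI, and 8 more reserved bits. The result becomes
    the UDP payload sent between VXLAN tunnel endpoints.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                       # I flag set, reserved bits zero
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_frame

# The VNI is what scopes the overlay: two tenants can reuse the same
# MACs and VLANs because their frames ride in different VXLAN segments.
packet = vxlan_encapsulate(b"\x00" * 60, vni=5000)
```

That 24-bit VNI (about 16 million segments, versus 4096 VLANs) is the field doing all the work in the multi-tenant story.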
But wait, there’s more… Read More »
Tags: Nexus 1000v, OTV, Overlay Transport Virtualization, virtual network overlays, VXLAN
For me, even though I am mostly a hardware geek, one of the coolest parts of the Cisco ONE launch at Cisco Live was the introduction of onePK. We see onePK as a core enabling technology that will underpin some cool stuff down the road.
So, one of the more common questions I get is about the relationship between onePK and other technologies related to network programmability such as OpenFlow (OF). Many folks mistakenly view this as an either/or choice. To be honest, when I first heard about onePK, I thought it was OpenFlow on steroids too; however, I had some fine folks from NOSTG educate me on the difference between the two. They are, in fact, complementary and for many customer scenarios, we expect them to be used in concert. Take a look at the pic below, which shows how these technologies map against the multi-layer model we introduced with Cisco ONE:
As you can see, onePK gives developers comprehensive, granular programmatic access to Cisco infrastructure through a broad set of APIs. On the other hand, protocols such as OpenFlow concern themselves with communications and control amongst the different layers—in OpenFlow’s case, between the control plane and the forwarding plane. Some folks have referred to onePK as a “northbound” interface and protocols such as OpenFlow as “southbound” interfaces. While that might be helpful to understand the difference between the two technologies, I don’t think that this is a strictly accurate description. For one thing, developers can use onePK to directly interact with the hardware. Second, our support for other protocols such as OpenFlow is delivered through agents that are built using onePK.
That last part, about the agent support, is actually pretty cool. We can create agents to provide support for whatever new protocols come down the pike by building them upon onePK. This allows flexibility and future-proofing while still maintaining a common underlying infrastructure for consistency and coherency.
For instance, we are delivering our experimental OF support by building it atop the onePK infrastructure. For customers this is a key point: they are not locked into a single approach—they can concurrently use native onePK access, protocol-based access, or traditional access (aka run in hybrid mode) as their needs dictate. Because we are building agents atop onePK, you don’t have to forgo any of the sophistication of the underlying infrastructure. For example, with the forthcoming agent for the ASR9K, we expect to have industry-leading performance because of the level of integration between the OF agents and the underlying hardware made possible by onePK.
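The layering described above can be sketched in a few lines. To be clear, this is purely illustrative — these class and method names are invented for the sketch and do not come from the actual onePK SDK or any OpenFlow library. The point is the shape: a protocol agent translates wire-protocol messages into calls on one common device API, while native access to that same API remains available alongside it.

```python
# Illustrative sketch only: hypothetical names, not the onePK SDK.
# Shows a protocol agent (OpenFlow-style) layered on a common device API.

class DeviceAPI:
    """Stand-in for a native programmatic interface to the device."""
    def __init__(self):
        self.flow_table = []

    def install_flow(self, match: dict, actions: list) -> None:
        # Native access: a developer's application could call this directly.
        self.flow_table.append({"match": match, "actions": actions})

class OpenFlowAgent:
    """A protocol agent built on the same device API.

    It translates protocol messages (flow-mods here) into native calls,
    so new southbound protocols can be added without new plumbing.
    """
    def __init__(self, device: DeviceAPI):
        self.device = device

    def handle_flow_mod(self, msg: dict) -> None:
        self.device.install_flow(msg["match"], msg["actions"])

device = DeviceAPI()
agent = OpenFlowAgent(device)

# A controller speaks the protocol to the agent...
agent.handle_flow_mod({"match": {"in_port": 1}, "actions": ["output:2"]})
# ...while an operator script uses the native API concurrently (hybrid mode).
device.install_flow({"dl_dst": "aa:bb:cc:dd:ee:ff"}, ["drop"])
```

Because both paths land in the same device API, the protocol agent inherits whatever integration that API has with the underlying hardware — which is the crux of the hybrid-mode argument above.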
In closing, you can see how extensible our programmatic support is with the ability to use onePK natively or to support technologies and protocols as they are developed and released. This gives customers a remarkable level of flexibility, extensibility and risk mitigation.
Tags: ASIC, asr9k, ciscolive, netconf, onePK, OpenFlow, SDN