I was given the opportunity at Interop NY last week to give a 10-15 minute presentation at the Cisco booth. If you were watching the Twitter stream, you probably noticed the pictures of the full audiences we had in the booth throughout both days.
I spoke about cloud and networking, something that both Brian Gracely and James Urquhart blogged about recently. Read on for my slides and some narrative comments. I apologize ahead of time for not embedding the slides, but unfortunately that little feature doesn’t seem to be working currently. We’ve got a white paper on the same topic as well as a webcast series that Brian Gracely has been blogging about.
Hi. I’m not going to define cloud. Let’s talk about how we’re changing networking.
There are 3 fundamentals to IT: users, applications, and data. Users need to get to applications and data. Applications need to get to users and data. Data needs to get to users and applications. Client/server, Mainframe, Web 2.0, Big Data, Cloud, Virtualized or Bare Metal--doesn’t matter. The right users and the right applications need to get the right data at the right time in the right context with the right quality of service and the right quality of experience. Without a network, it doesn’t happen. Without a network, there’s no value. No network, no cloud.
The changes we’re watching and experiencing in how IT is done, delivered, operated, and consumed are driven by applications. In broad terms, there are two categories: client/server (legacy) apps, and web-scale, highly parallel apps. Virtualization is an enabling technology, not an application category. Some portion of “legacy” enterprise apps are or will be virtualized, and some portion of next-gen apps are or will be virtualized.
Virtualization has spread far and wide the idea of infrastructure abstraction and resource pooling. Cisco has begun to talk about a “Data Center Fabric” idea. You view compute, storage, and networking in terms of capabilities and resources instead of boxes and locations. You are able to define the resources you need and receive them from whatever bit of physical/virtual infrastructure has the capabilities to provide those resources--whether that be via API calls from an orchestration engine or via an engineer at a CLI.
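One way to picture this capability-and-resource view is a scheduler that matches a request against a pool, without caring which box ends up serving it. Here’s a minimal sketch in Python; the node names, attributes, and `find_capacity` function are all illustrative, not any real Cisco API:

```python
# Illustrative sketch: matching a resource request against a pooled
# "data center fabric" inventory. Names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_ghz: float
    mem_gb: int
    capabilities: set  # e.g. {"10gbe", "fcoe"}

def find_capacity(pool, cpu_ghz, mem_gb, required_caps):
    """Return the first node that can satisfy the request,
    regardless of which physical/virtual box it happens to be."""
    for node in pool:
        if (node.cpu_ghz >= cpu_ghz
                and node.mem_gb >= mem_gb
                and required_caps <= node.capabilities):
            return node
    return None

pool = [
    Node("rack1-blade3", 2.4, 96, {"10gbe"}),
    Node("rack2-blade1", 3.0, 192, {"10gbe", "fcoe"}),
]

match = find_capacity(pool, cpu_ghz=2.6, mem_gb=128, required_caps={"fcoe"})
print(match.name)  # rack2-blade1
```

Whether the caller is an orchestration engine making API calls or an engineer at a CLI, the request is expressed in terms of capabilities, not box names.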
Within this fabric, you should be able to carve and re-carve slices of isolated resources and capabilities for a particular application, group, business unit, or customer. This ability should protect you from noisy and nosy neighbors, and protect them from you. This is the fundamental idea behind our Virtual Multi-Tenant Data Center designs and our work on Network Containers.
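To make the carving idea concrete, here’s a toy sketch of handing each tenant a non-overlapping slice of a shared fabric. The container structure (VLAN block, per-tenant VRF, QoS class) is hypothetical and grossly simplified, not an actual product schema:

```python
# Illustrative sketch only: carving isolated "network container" slices
# out of a shared fabric. All field names here are invented.

def carve_container(tenant, vlan_allocator, size=4):
    """Reserve a block of VLANs and describe the tenant's isolated slice."""
    vlans = [next(vlan_allocator) for _ in range(size)]
    return {
        "tenant": tenant,
        "vlans": vlans,            # layer-2 isolation
        "vrf": f"vrf-{tenant}",    # layer-3 isolation
        "qos_class": "gold",       # per-tenant service level
    }

vlan_ids = iter(range(100, 4095))  # shared pool of VLAN IDs
a = carve_container("tenant-a", vlan_ids)
b = carve_container("tenant-b", vlan_ids)

# No overlap between slices: no noisy (or nosy) neighbors.
assert not set(a["vlans"]) & set(b["vlans"])
```

The point is the shape of the operation: allocation from a shared pool, with isolation guaranteed by construction rather than by per-box configuration.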
And regardless of where or how users and applications reach each other and their data, the same security must be applied to them as well. It’s not enough to secure and optimize only the fabric and the containers. The policies must reach out from the data center, through intermediary networks, all the way to the end user (or application), and operate as consistently as possible across the whole transaction space.
If this holds, then we have some implications to consider. For any given customer, service provider, or organization, there will be more than one type of application, more than one type of traffic pattern, more than one requirements profile. The technology decisions you make should not box you into a brittle architecture or operating model. If your infrastructure makes you architecturally inflexible, every new application paradigm you have to adopt turns into a study in pain tolerance. There’s a reason Catalyst 6500s have been running in data centers for upwards of 10 years: they provided the architectural flexibility to adapt to changing application and operating paradigms for that long. The same flexibility is core to the Nexus line--the ability to service every paradigm that is known or known to be coming.
But high versatility tends to carry a proportional cost in complexity. Cisco is working against this complexity by building a single operating model across the Nexus line, which spares architects, engineers, and operators from constantly shifting to a new mental context when dealing with the network--from a virtual switch to a physical access switch to core switches to SANs, and out across WANs, VPNs, and so on.
Management systems should also become correspondingly simpler. Cisco’s DCNM product provides an ever better single pane of glass--a single way of seeing and understanding the workings of the whole network in one place, from virtual to physical, from Ethernet to FC to FCoE.
Often-cited challenges like VM mobility become achievable when resources are available in disparate or stretched fabrics, where you know your capabilities on either side and can maintain a consistent way of managing and operating the networks. The bigger challenge is understanding the risks with regard to new traffic patterns, latency, link saturation, security policy mobility, optimization policy mobility, and so on.
Keeping in mind that VMs or applications are only one end of a transaction, we have to think about the mobility of users and the variety of devices they use. Security, optimization, and quality of service must be applied to users wherever they are, on whatever device they happen to be using, and through whatever network they are traversing. This goes for both ends of the transaction. Policies have to be disaggregated from devices and their topological locations and become attached to users and applications instead. Cisco is doing this by working toward the same policy framework in appliances, software, virtual appliances, device modules, and clients. See UNS for an example.
Finally, automation. Networking has resisted automation for a long time. The same kind of ability to provision virtual machines, blades, and storage tiering, for example, must come to networking. Cisco is working on this through our management systems and via recent acquisitions that will make network provisioning and re-provisioning a part of the normal workflow of operating the infrastructure.
And if we can automate it, we can expose that capability via APIs. The same way you can have programmatic control over racks of UCS servers, we’re driving programmability into networking by creating an abstraction layer that can be controlled via a consistent API across our products. Instead of a few hours’ labor at a dozen CLIs, or a brittle set of Perl scripts that can’t adapt to changes, how about a few minutes spent writing an API call into a program?
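As a sketch of what that API call might look like, here’s a hypothetical REST-style provisioning request built with Python’s standard library. The controller hostname, endpoint path, and payload shape are all invented for illustration; no real controller API is implied:

```python
# Hypothetical sketch: one declarative provisioning call to a network
# controller's REST API, instead of a dozen CLI sessions. The endpoint
# and payload are invented, not a real product's API.

import json
import urllib.request

def build_vlan_request(controller, vlan_id, name, token):
    """Build a POST request describing the desired VLAN state."""
    payload = json.dumps({"vlan_id": vlan_id, "name": name}).encode()
    return urllib.request.Request(
        f"https://{controller}/api/v1/vlans",  # illustrative endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_vlan_request("controller.example.com", 210, "web-tier", "TOKEN")
# urllib.request.urlopen(req) would submit it; here we only build the call.
```

The same request can be issued by a human script or by an orchestration workflow, which is the point: provisioning and re-provisioning become a normal, repeatable part of operating the infrastructure.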
Here’s a summary. These are the things we’re working on.
Of course there are some assumptions here, like the idea that the cloud (whatever it is) has to live somewhere, and that that somewhere is a data center of one kind or another. Keep that in mind. And remember: 1) this is thinking in progress, and 2) YMMV.