
Network Services Headers (NSH): Creating a Service Plane for Cloud Networks

November 19, 2014 at 9:00 am PST

In the past, we have pointed out that configuring network services and security policies into an application network has traditionally been the most complex, tedious and time-consuming aspect of deploying new applications. For a data center or cloud provider to stand up applications in minutes rather than days, a clear obstacle must be overcome: easily configuring the right service nodes (e.g. a load balancer or firewall), with the right application and security policies to support the specific workload requirements, independent of location in the network.

Let’s say, for example, you have a world-beating, best-in-class firewall positioned in some rack of your data center. You also have two workloads, running on servers a few hops away, that need to be separated according to security policies implemented on this firewall. The network and security teams have traditionally had a few challenges to address:

  1. If traffic from workload1 to workload2 needs to go through a firewall, how do you route traffic properly, considering the workloads themselves have no visibility into the specifics of the firewalls they need to work with? Traffic routing of this nature can be implemented in the network through the use of VLANs and policy-based routing techniques, but this approach is not scalable to hundreds or thousands of applications, is tedious to manage, limits workload mobility, and makes the whole infrastructure more error-prone and brittle (see the sketch after this list).
  2. The physical location of the firewall or network service largely determines the topology of the network, and has historically restricted where workloads could be placed. But modern data center and cloud networks need to be able to provide the required services and policies independent of where the workloads are placed, on this rack or that, on-premises or in the cloud.
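
To make the scaling concern concrete, here is a minimal back-of-the-envelope sketch; the function, names and numbers are hypothetical, purely for illustration. When steering is encoded per workload pair with VLANs and policy-based routing, the rule count multiplies with every application and every service in the chain.

```python
# Hypothetical model of VLAN/PBR steering state. Every (source, destination)
# workload pair that must traverse a service chain needs its own steering
# entries on each switch along the path.

def steering_entries(workload_pairs: int, services_per_chain: int,
                     hops_per_segment: int) -> int:
    """Rules to configure when steering is encoded per pair, per service hop."""
    return workload_pairs * services_per_chain * hops_per_segment

# 200 applications x ~25 communicating pairs each, 2 services, 3 hops:
pairs = 200 * 25
print(steering_entries(pairs, services_per_chain=2, hops_per_segment=3))
# -> 30000 entries to configure, audit, and update on every workload move
```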

Whereas physical firewalls might have been incorporated into an application network through VLAN stitching, many other network services require yet other protocols and techniques to include them in an application deployment, such as Source NAT for application delivery controllers, or WCCP for WAN optimization. The complexity of configuring services for a single application deployment thus increases measurably.
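
This is precisely the gap a Network Services Header fills: a service-plane encapsulation that carries the service path and metadata along with the packet, so steering no longer depends on VLANs, SNAT or WCCP tricks. As a rough illustration, here is a minimal sketch of packing an NSH base header and service path header. Note the assumption: the field layout follows the NSH encoding as later standardized in RFC 8300, not the 2014 draft, so treat it as illustrative only.

```python
import struct

def nsh_header(spi: int, si: int, ttl: int = 63,
               md_type: int = 2, next_proto: int = 0x01) -> bytes:
    """Pack an NSH base header + service path header (RFC 8300 layout).

    spi: 24-bit Service Path Identifier (selects the chain to traverse)
    si:  8-bit Service Index (position in the chain, decremented per hop)
    next_proto: 0x01 = inner packet is IPv4
    """
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    length = 2  # total header length in 4-byte words (no context headers)
    # Ver=0 and the O (OAM) bit=0, so the top bits of the base word stay clear.
    base = (ttl << 22) | (length << 16) | (md_type << 8) | next_proto
    service_path = (spi << 8) | si
    return struct.pack("!II", base, service_path)

# Path 42, entering the chain at the first service hop:
print(nsh_header(spi=42, si=255).hex())  # -> 0fc2020100002aff
```

Because the path travels in the header, a firewall or load balancer can sit anywhere in the fabric: forwarding follows the Service Path Identifier, not the VLAN topology.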

Open Standards, Open Source, Open Loop

As the IETF (Internet Engineering Task Force) meets in Hawaii (IETF 91), the unavoidable question for both participants and observers is whether a Standards Development Organization (SDO) like the IETF is relevant in a rapidly expanding environment of Open Source Software (OSS) projects.

For those new to the conversation, the open question is NOT whether SDOs should exist.  They are a political reality inexorably tied to trade policies and international relationships.  The fundamental reason behind their existence is to avoid a communications Tower of Babel (with the resulting economic consequences) and establish governance over the use of global commercial and information infrastructure (not just acceptable behavior, but the management of resources like addressing as well).  Rather, the question is about their role going forward in enabling innovation. 

SDO Challenges

SDOs (like the IETF) have to evolve their processes…

Cisco’s OpenH264 Now Part of Firefox

Voice and video communications over IP have become ubiquitous over the last decade, pervasive across desktop apps, mobile apps, IP phones, video conferencing endpoints, and more.  One big barrier remains: users can’t collaborate directly from their web browser without downloading cumbersome plugins for different applications.  WebRTC – a set of extensions to HTML5 – can change that and enable collaboration from any browser. However, one of the major stumbling blocks in adoption of this technology is a common codec for real-time video.

The Internet Engineering Task Force (IETF) and World Wide Web Consortium (W3C) have been working jointly to standardize on the right video codec for WebRTC. Cisco and many others have been strong proponents of the H.264 industry standard codec. In support of this, almost a year ago Cisco announced that we would be open sourcing our H.264 codec and providing the source code, as well as a binary module that can be downloaded for free from the Internet. Perhaps most importantly, we announced that we would not pass on our MPEG-LA licensing costs for this binary module, making it effectively free for applications to download the module and communicate with the millions of other H.264 devices. At that time, Mozilla announced its plans to add H.264 support to Firefox using OpenH264.
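
For a sense of what agreeing on a common codec means on the wire: WebRTC endpoints advertise their codecs in SDP, and a session can use H.264 only if both sides list it. Here is a small, hypothetical sketch (standard library only; the SDP snippet is invented for illustration):

```python
# Hypothetical helper: does this SDP offer include H.264 video?
# WebRTC endpoints list supported codecs as "a=rtpmap:<pt> <codec>/<clock>".

def offered_video_codecs(sdp: str) -> list:
    codecs = []
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            # e.g. "a=rtpmap:102 H264/90000" -> "H264"
            codecs.append(line.split(maxsplit=1)[1].split("/")[0])
    return codecs

offer = """v=0
m=video 9 UDP/TLS/RTP/SAVPF 96 102
a=rtpmap:96 VP8/90000
a=rtpmap:102 H264/90000
"""
print("H264" in offered_video_codecs(offer))  # -> True
```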

Since then, we’ve made enormous progress in delivering on that promise. We open sourced the code, set up a community and website to maintain it, delivered improvements and fixes, published the binary module, and made it available to all. This code has attracted a community of developers that has helped improve…

Introducing OpFlex – A new standards-based protocol for Application Centric Infrastructure

Continuing its tradition of contributing and committing to open source and open standards over the last 25 years, Cisco today announced “OpFlex” – a new open, standards-based protocol for Application Centric Infrastructure that has been submitted into the IETF standardization process. We believe this will accelerate multi-vendor innovation in data center and cloud networks, driving operational simplicity, lower costs and increased agility.

Why is this required?

Traditional SDN models today function on the basis of an imperative control model: a centralized controller programs distributed network entities using the lowest-common-denominator feature set across vendors, such as bridges, ports and tunnels. As the network scales, the controller becomes a bottleneck because it must maintain ever more state, which hurts performance and resiliency. Likewise, because application, operations and infrastructure requirements must all be translated into network configuration, agility suffers and a manual learning process is introduced, forcing app developers to describe their requirements in low-level constructs.
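
A minimal sketch of that imperative pattern (all class, rule and switch names here are hypothetical, for illustration only): the controller computes every low-level forwarding rule itself and pushes it to every switch, so its state grows with the entire network.

```python
# Hypothetical illustration of imperative SDN control: the controller owns
# every low-level rule, so its state grows with switches x flows.

class ImperativeController:
    def __init__(self):
        self.rules = {}  # (switch, match) -> action; all state lives here

    def connect_workloads(self, switches, src_ip, dst_ip, dst_port):
        # App-level intent must be translated into one concrete
        # match/action entry per switch on the path.
        for sw in switches:
            match = (src_ip, dst_ip, dst_port)
            self.rules[(sw, match)] = "forward"
            self.push_to_switch(sw, match, "forward")

    def push_to_switch(self, sw, match, action):
        print(f"{sw}: install {match} -> {action}")

ctl = ImperativeController()
ctl.connect_workloads(["sw1", "sw2", "sw3"], "10.0.0.5", "10.0.1.9", 80)
# Every new flow, and every topology change, round-trips through the
# controller; the switches themselves hold no policy.
```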

Contrast that with the vision of the ACI model and the Application Policy Infrastructure Controller (APIC): ACI adopts a declarative management approach. This model abstracts applications, operations and infrastructure, providing simplification and agility. By distributing complexity to the edges, it also improves scalability and resiliency – data forwarding can continue even if no controller is present. It further provides ease of use, with self-documenting policies automatically deployed to, or cleaned up from, devices as necessary. All of this helps circumvent the issues seen in traditional SDN models.
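
The declarative counterpart, sketched in the same hypothetical style: the controller publishes only abstract policy, and an agent on each device renders it into that device’s native configuration, so forwarding continues by local policy even when the controller is unreachable.

```python
# Hypothetical illustration of declarative control: the controller publishes
# intent; each device agent resolves it locally into native configuration.

POLICY = {
    "consumer_group": "web",                   # endpoint group using a service
    "provider_group": "app",                   # endpoint group providing it
    "contract": {"proto": "tcp", "port": 80},  # traffic the groups allow
}

class DeviceAgent:
    """Runs on each switch/router/service node, not on the controller."""

    def __init__(self, name):
        self.name = name
        self.local_config = []  # the policy rendered in native terms

    def resolve(self, policy):
        c = policy["contract"]
        self.local_config.append(
            f"permit {c['proto']}/{c['port']} "
            f"{policy['consumer_group']} -> {policy['provider_group']}"
        )
        # From here on, forwarding consults local_config only; traffic
        # keeps flowing even if the controller becomes unreachable.

agent = DeviceAgent("leaf-101")
agent.resolve(POLICY)
print(agent.name, agent.local_config)
# -> leaf-101 ['permit tcp/80 web -> app']
```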

For this declarative model to work across a multi-vendor environment, a standard protocol is needed to translate and map policy definitions into the infrastructure – across physical and virtual switches, routers, and L4-L7 network services – and hitherto none has existed. This vacuum led to the development of “OpFlex” – a new open standard recently submitted to the IETF.

Who is contributing to OpFlex?

Several industry leaders and practitioners are actively involved in the standardization process. These include Microsoft, IBM, Citrix and SunGard Availability Services, in addition to Cisco.

Open Source is just the other side, the wild side!

March is a rather event-laden month for Open Source and Open Standards in networking: the 89th IETF, EclipseCon 2014, RSA 2014, the Open Networking Summit, the IEEE International Conference on Cloud (where I’ll be talking about the role of Open Source as we morph the Cloud down to Fog computing) and my favorite, the one and only Open Source Think Tank, where this year we dive into the not-so-small world (there is plenty of room at the bottom!) of machine-to-machine (m2m) communication and Open Source, which some call the Internet of Everything.

There is a lot more to March Madness, of course; in the case of Open Source, it is a good time to celebrate the 1st anniversary of “Meet Me on the Equinox“, the fleeting moment when daylight conquers the night, the day that project Daylight became Open Daylight. As I reflect on how quickly it started and grew from the hearts and minds of folks more interested in writing code than talking about standards, I think about how much the Network, previously dominated, as it should be, by Open Standards, is now beginning to run with Open Source, as it should. We captured that dialog with our partners and friends at the Linux Foundation in this webcast, which I hope you’ll enjoy. I hope you’ll join us this month in one of these neat places.

As Open Source has become dominant in just about everything – Virtualization, Cloud, Mobility, Security, Social Networking, Big Data, the Internet of Things, the Internet of Everything, you name it – we get asked how to get the balance right. How does one work with the rigidity of Open Standards and the fluidity of Open Source, particularly in the Network? There is only one answer: think of it as the Yang of Open Standards and the Yin of Open Source; they need each other, they cannot function without each other, particularly in the Network. Open Source is just the other side, the wild side!
