By Tina Lam, Product Manager
MPLS-based Layer 2 VPN has been around for over 10 years, since the inception of the IETF Pseudowire Emulation Edge-to-Edge (PWE3) Working Group. Many drafts and standards have been added since then to address different applications and to improve scale and convergence in different topologies. L2VPN as a whole is widely deployed in both service provider and enterprise networks, from Ethernet services, to fixed and mobile convergence, to enterprise campus Layer 2 transport.
Recently, one emerging driver that has been picking up a lot of momentum is the use of L2VPN for Data Center Interconnect (DCI). Data centers are often situated in different locations to be geo-redundant, for the purposes of workload mobility and business continuity. At the same time, the physical location of the data center has to be transparent to users and to applications; hence the need for Layer 2 connectivity between sites. While Ethernet over MPLS (EoMPLS) and Virtual Private LAN Service (VPLS) have been used for this purpose, DCI presents new requirements and challenges not fully addressed today. To keep the data center always on, and to utilize all resources and links as efficiently as possible, data centers need all-active redundancy and load balancing. The technology should also be as simple as possible to provision and manage.
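To make the provisioning model concrete, here is a minimal sketch of a VPLS bridge domain in IOS XR-style CLI on one PE router. The bridge group, attachment circuit, neighbor addresses, and pseudowire IDs are all hypothetical, not taken from any particular deployment.

! One PE router in a VPLS full mesh (IOS XR-style, hypothetical names/addresses).
! The subinterface is assumed to be configured for l2transport elsewhere.
l2vpn
 bridge group DCI
  bridge-domain DC-VLAN100
   ! Attachment circuit toward the local data center, VLAN 100
   interface GigabitEthernet0/0/0/1.100
   ! Virtual forwarding instance: one pseudowire per remote data center PE
   vfi VPLS-DC
    neighbor 10.0.0.2 pw-id 100
    neighbor 10.0.0.3 pw-id 100

Note that each pseudowire terminates on exactly one remote PE, and classic VPLS allows only one active PE per multihomed site, which is precisely why all-active redundancy and load balancing are hard to deliver with VPLS alone.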
Tags: cisco live, data center, Data Center Interconnect, DCI, E-VPN, EoMPLS, mpls, PBB-EVPN, Service Provider, VPLS
A team of us at Cisco, together with industry colleagues, has been working on defining and standardizing a new Layer 2 VPN solution known as Ethernet Virtual Private Network (E-VPN). In this post, I will discuss the key requirements that helped shape this solution, and attempt to shed some light on the drivers for the technology and how it enables the evolution of Service Provider L2VPN offerings.
Tags: Data Center Interconnect, E-LAN, E-Line, E-Tree, E-VPN, IP/MPLS, L2VPN, Service Provider, Virtual Private Clouds
Data Center Connections using “nV Edge”
The ASR 9000 product family has recently come out with a new feature called nV Edge (nV = Network Virtualization). This feature unifies the control, data, and management planes at the data center edge. So, I’ll note a couple of things here on this feature and then tell you why I think it has the potential to be truly awesome.
My good friend Rabiul Hasan wrote a proof-of-concept document, just posted to Design Zone, that provides the configuration and setup details. I encourage you to go check it out here.
Tags: asr 9000, asr9k, Data Center Interconnect, DCI, nV, nV Edge, VPLS
I previously discussed using LISP to optimize your client-server traffic, so today I’ll discuss the reverse direction: egress path optimization from the server to the client. Let’s go over the need for path optimization in the server-to-client direction with some pictures and explanations.
The Virtual Machine (VM) server is configured with a default gateway IP address, 192.168.1.1, which is the next-hop IP address that the VM forwards packets toward as traffic returns to the client outside the data center. In this data center environment, we’ve deployed the default gateway using a First Hop Redundancy Protocol (FHRP). In reality, FHRP is an umbrella term covering Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP), the two main technologies that provide transparent failover and redundancy at the first-hop IP router. Please see info on FHRP here.
Also notice that the VM default gateway is the same as the HSRP Virtual IP address (VIP). The VIP binds to one of the physical HSRP routers through an election process that uses Layer 2 control packets exchanged between the two routers. Because the VM default gateway points to a VIP rather than to a physical router, it may move between the physical HSRP routers, which is of course the intent and the design when using any type of FHRP.
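To see what this looks like in practice, here is a minimal HSRP sketch in IOS-style CLI for the two physical routers sharing the VIP. Only the 192.168.1.1 VIP comes from the example above; the VLAN and the routers’ real addresses are hypothetical.

! Router A (hypothetical addressing); higher priority, so it wins the election
interface Vlan100
 ip address 192.168.1.2 255.255.255.0
 ! 192.168.1.1 is the VIP the VM uses as its default gateway
 standby 1 ip 192.168.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Router B; becomes active and takes over the VIP if Router A fails
interface Vlan100
 ip address 192.168.1.3 255.255.255.0
 standby 1 ip 192.168.1.1
 standby 1 priority 100

Whichever router is active also answers ARP for the VIP with a shared virtual MAC address, which is what lets the default gateway move between physical routers transparently to the VM.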
In the picture above, the path is optimized from server to client, so now let’s take a look at what happens when we migrate the VM to the new data center.
Tags: cloud, data center, Data Center Interconnect, DCI, FHRP, HSRP, LISP, mobility, N7K, Nexus 7000, OTV, vMotion, Workload Mobility
Today I want to bring up a DCI use case that I’ve been thinking about: capacity expansion. As you know, the purpose of DCI is to connect two or more data centers together so that they can share resources and deliver services. The capacity expansion use case arises when you have temporary traffic bursts (cloud bursts), planned or unplanned: maintenance windows, migrations, or really any temporary event that requires additional service capacity.
To start addressing the challenge of meeting these planned and unplanned cloud-burst and capacity-expansion requirements, check out the recently announced ACE + OTV feature called Dynamic Workload Scaling (DWS).
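DWS builds on the OTV LAN extension between the data centers, so as a point of reference, here is a minimal NX-OS-style OTV overlay sketch on one edge device. The interfaces, multicast groups, VLAN range, and site identifier are hypothetical, and the ACE-side DWS configuration is not shown.

! Hypothetical NX-OS-style OTV edge-device configuration.
! OTV stretches the listed VLANs between data centers, which is what
! lets a bursted VM keep its IP addressing at the remote site.
feature otv
!
otv site-identifier 0x1
! VLAN used to discover the other local OTV edge device
otv site-vlan 99
!
interface Overlay1
  ! Uplink toward the DCI transport
  otv join-interface Ethernet1/1
  ! Multicast groups for the OTV control plane and extended multicast data
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/26
  ! VLANs extended between the sites
  otv extend-vlan 100-110
  no shutdown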
Tags: ACE, Burst, Capacity Expansion, Cisco, cloud, Cloud Burst, data center, Data Center Interconnect, DC, DCI, DWS, Dynamic Workload Scaling, locality, Nexus 7000, OTV, SASU, Systems Architecture and Strategy Unit, virtual machine, VM, VM Locality