Data Center Interconnect using nV Edge Clustering

July 13, 2012

Data Center Connections using “nV edge”

The ASR 9000 product family recently introduced a new feature called nV Edge (nV = Network Virtualization). This feature unifies the data center edge control, data, and management planes. I'll note a couple of things about this feature and then tell you why I think it has the potential to be truly awesome.

My good friend Rabiul Hasan wrote a proof-of-concept document, just posted to Design Zone, that provides the configuration and setup details. I encourage you to check it out here.

First, let's look at an existing resilient edge solution using MC-LAG and VPLS.


On the left side we have a Nexus switch (say, a Nexus 7000) configured with a Virtual Port Channel, with one physical link towards the top ASR 9000 and the second physical link towards the standby ASR 9000. The point here is that the port channel is split between two physical ASR 9000 routers, so if one goes down, you still have an active link. The second point to note is that only 50% of the links are utilized.

(Also note that typically you’ll have a second Nexus switch for redundancy purposes).

The ASR 9000 routers are in an active/standby "Redundancy Group," with one router active and one in standby. This ultimately means we need 4 VPLS pseudowires between each pair of ASR 9000 routers. It also means that WAN ports may be un(der)utilized, depending on how you set those up.

Please see this detailed document for specifics on MC-LAG and VPLS, written by DCI Technical Leaders Max Ardica, Patrice Bellagamba and Nash Darukhanawalla.

Now let’s look at using the new ASR 9000 nV Edge feature with VPLS.

Each ASR 9000 has two RSPs (Route Switch Processors), one active and one backup. Each RSP has two EOBC (Ethernet Out of Band Channel) ports, so there are 4 EOBC connections between the ASRs. The EOBC connections may be point-to-point or go through a switch. The EOBC is where the control and management planes synchronize between the two physical chassis. These EOBC ports are the SFP ports on the RSP440.



On the bottom here (orange line), we have an IRL (Inter Rack Link). The IRL is a 10GE line card link between the chassis, with a minimum requirement of two IRLs. Note that the IRLs are on the line cards themselves, and you have to configure each port with the IRL nV Edge configuration (please see the links above). This is because the IRLs are in the data plane and, as such, need the line card forwarding logic for FIB lookup and packet forwarding.
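As a rough sketch of what that looks like, an IRL port on each chassis is marked as an nV Edge inter-rack link under the interface configuration (the interface names below are placeholders; note that rack 1 ports are addressed with a leading "1" in the rack position):

```
! Rack 0 IRL port
interface TenGigE0/1/0/0
 nv edge interface
!
! Rack 1 IRL port
interface TenGigE1/1/0/0
 nv edge interface
```

See Rabiul's proof-of-concept document linked above for the exact, validated configuration.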

The EOBC + IRL functionality is what creates the nV Edge cluster, yielding both ASR 9000 routers active in the control, management, and data planes.

How do we get this deployed?

Step 1: Ensure you have the right HW/SW combination.

Each physical ASR 9000 requires IOS XR software release 4.2.1 and the nV Cluster License (A9K-NV-CLUSTR-LIC).

Each physical ASR 9000 requires the RSP440, plus the SIP-700 and "Typhoon" Enhanced Ethernet line cards.

Step 2: Configure the nV Edge system itself, i.e., the nV cluster. Here's the link that shows how to get that done.

You’ll see in the configuration guide that the physical chassis are referred to as “rack 0” and “rack 1” (new terminology alert 🙂 ).

Once rack 0 and rack 1 are configured and connected (see figure above), they form the “cluster.”
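As a hedged sketch of the idea (the serial numbers below are placeholders; follow the configuration guide linked above for the exact procedure), the cluster is defined in admin configuration by mapping each chassis serial number to a rack number:

```
admin
 configure
  nv edge control serial FOX1111AAAA rack 0
  nv edge control serial FOX2222BBBB rack 1
  commit
```

Once both racks are up, the cluster behaves as a single logical router with a single configuration and management plane.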

Step 3: Configure the network components.

The ASR 9000 cluster is configured with a bundle interface for the downlinks into the data center (e.g., a Nexus 7000). The cluster is also configured with a VPLS PW towards the PW endpoint.

What you end up with is a vPC from the Nexus 7000 to the ASR 9000 cluster. From the Nexus 7000's perspective, the vPC is not split between two physical ASR 9000 routers but terminates on one (virtual) ASR 9000, so you get the benefit of both physical links being utilized (as opposed to the MC-LAG solution shown above). Additionally, you only need to configure 1 VPLS pseudowire instead of 4.
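To make this concrete, here's a sketch of what the cluster-side configuration might look like, with one bundle member on each rack facing the Nexus 7000 and a single VPLS pseudowire (the interface names, VLAN, bridge-domain names, and the PW neighbor/pw-id are all illustrative assumptions):

```
! One bundle member per physical chassis (rack 0 and rack 1)
interface TenGigE0/0/0/1
 bundle id 10 mode active
!
interface TenGigE1/0/0/1
 bundle id 10 mode active
!
! L2 attachment circuit on the bundle
interface Bundle-Ether10.100 l2transport
 encapsulation dot1q 100
!
! VPLS bridge domain with a single pseudowire to the remote PE
l2vpn
 bridge group DCI
  bridge-domain BD100
   interface Bundle-Ether10.100
   !
   neighbor 192.0.2.2 pw-id 100
```

Because the cluster is one logical router, only the one `neighbor ... pw-id` statement is needed where the MC-LAG design required four pseudowires.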

Ultimately, the nV Edge clustering + VPLS topology can be as simple as shown below, while maintaining your resiliency requirements.



The benefits of nV Edge + VPLS, and why it can be awesome:

1) Increased physical link utilization

2) Up to a 75% decrease in VPLS pseudowire requirements (4 pseudowires down to 1)

3) 50-millisecond or less recovery times

Check it out, talk with your local account team or partner, and thanks for reading.
