


So, you have moved your server virtualization pilots into limited production and are in the process of scaling this out such that, at some point, it will move from limited production to a massive rollout across the entire enterprise. Or maybe you are already there. In any case, the time has probably come, or soon will, when you realize that doing VMotion or DRS or otherwise moving workloads across physical servers in the Data Center (DC) is extremely useful. But wouldn't it be cool to be able to do that across servers in different DCs? There are many reasons this becomes interesting, now and in the future. Maybe power is cheaper in DC B than in DC A. Maybe there are no more available resources for that next workload in DC A, but there are in DC B. Maybe if you move workload from A to B you can shut down servers, heck maybe entire racks, in A for significant segments of time. You get the point: there are and will be compelling reasons to want to move workloads between DCs.

Great, but how do you do it? Typically, DCs are connected via routers, which implicitly means there is a Layer 3 boundary between them. If you are reading this, I'm going to assume you have some involvement in running a network, which means you probably don't have a lot of spare time, which means you probably are not real interested in readdressing lots of IP devices. Thus, we have a bit of an issue. This issue is handled by extending the Layer 2 domain across DCs, and there are several different ways to accomplish this.

Before we discuss the mechanism by which we can transport that L2 domain across DCs, I want to first mention that this whole DC Interconnect topic is about much more than just transport. When a given workload moves from DC A to DC B, what implications does that have for its storage target? If a user collocated with DC A has her application dynamically moved to DC B, she will probably never know, provided DC B is connected via 10GbE over dark fiber. But if DC B is far away over slow links, she could have an issue. My point here is that incorporating intelligent Fibre Channel switching, such that the SAN is aware of VM activity, is important. Incorporating WAAS solutions as part of the overall design (particularly in scenarios like the one above) is important. Lots to consider as we move forward with these designs.

Before we go into any detail on those topics, we need to think through the basic transport between the DCs, which can be dumped into one of three buckets:

1) Dark Fiber
2) Vanilla IP
3) MPLS

I don't have time to go into all of these in detail now, but let's consider Dark Fiber, the option that provides us with the most flexibility and options. Before proceeding, however, we should stop to recognize that in extending the L2 domain between sites we are, theoretically, undoing what we have spent the past couple of decades building. Many years ago at my first networking job (yes, I was once a customer, too), I remember broadcast storms emanating from one of our Midwest sites taking down a West Coast site. Not too long after this happened a few times, we decided introducing L3 between sites might be a good idea! So, this notion of extending L2 seems to be a reversion to the dark ages of networking: Spanning Tree architectures become complex as the diameter/topology increases, Spanning Tree convergence or failure within one DC affects all other DCs connected at L2, Spanning Tree becomes fragile during link flaps, etc., etc., etc. STOP the madness!
Craig, what in the name of all that is sacred in networking are you suggesting this for?! Well, luckily we have progressed a bit since Radia Perlman did her thing and 802.1D was blessed. If you have Catalyst 6500s or Nexus 7000s interconnecting your DCs, you can use Virtual Switching System (VSS) or virtual port channels (vPCs). VSS basically aggregates two Catalyst 6500 switches into a single virtual switch. So you have a single logical switch running across two physical DCs. Pretty cool, and a radically different scenario than just schlepping Spanning Tree across them: simpler, more scalable, more resilient. A vPC allows links that are physically connected to two different Nexus 7000s to appear as a single port channel to a third downstream device (e.g., another switch or even a server). Among other things, vPCs increase usable bandwidth by allowing traffic to flow over the otherwise blocked ports/links in a standard dual-homed STP design (there is a rough config sketch at the bottom of this post if you want to see what that looks like). In any event, VSS and vPCs are clearly simpler than running VPLS or MPLS, though those are certainly viable options as well. If you don't have the option of dark fiber, and your SP is already providing MPLS between sites, then that is an obvious scenario in which you would want to use MPLS.

Are you already doing DC Interconnect? What are your experiences? What are the big wins and what are the pain points? Thanks for whatever you can share with the rest of the readers.
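As promised above, here is a minimal, illustrative sketch of what the vPC piece looks like in NX-OS on one of the two Nexus 7000 peers. To be clear, this is not a full DCI design: the interface numbers, VLANs, and peer-keepalive addresses are placeholders I picked for the example, and a production deployment involves more (a dedicated keepalive path/VRF, matching configuration on both peers, STP edge and storm-control settings, and so on).

    ! Illustrative sketch only; interface numbers, VLANs, and addresses are placeholders
    feature vpc
    feature lacp

    vpc domain 10
      ! Keepalive between the two vPC peer switches (example addresses)
      peer-keepalive destination 192.0.2.2 source 192.0.2.1

    ! Peer-link between the two Nexus 7000s carrying the extended VLANs
    interface port-channel 10
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 100,200
      vpc peer-link

    ! Port channel toward the downstream switch or server, which sees one logical link
    interface port-channel 20
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 100,200
      vpc 20

    ! Physical member link on this peer
    interface Ethernet1/1
      switchport
      switchport mode trunk
      channel-group 20 mode active

The downstream device simply bundles its uplinks to both Nexus 7000s into one port channel, so both links forward traffic instead of one sitting blocked by Spanning Tree.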


2 Comments.


  1. In fact, the above is an alternative to the solution I have been considering for a move of Data Center and campus clients from one location to another. The original approach consisted of readdressing groups of clients and DC-based systems, by function, across L3 boundaries for the sake of a staged move. The DC interconnect option sounds great; the only question is whether a MAN-like connection (OPT-E-MAN) could offer the infrastructure necessary to sustain it for the duration of the move.

